<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/css" href="http://www.lps.ens.fr/~krauth/skins/common/feed.css"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
	<channel>
		<title>Werner KRAUTH - New pages [en]</title>
		<link>http://www.lps.ens.fr/~krauth/index.php/Special:Newpages</link>
		<description>From Werner KRAUTH</description>
		<language>en</language>
		<generator>MediaWiki 1.6.12</generator>
		<lastBuildDate>Sat, 25 Apr 2026 10:25:53 GMT</lastBuildDate>
		<item>
			<title>Tartero Shiratani Krauth 2026</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Tartero_Shiratani_Krauth_2026</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''G. Tartero, S. Shiratani, W. Krauth''' '''''Lifting the fog - a case for non-reversible &amp;quot;lifted&amp;quot; Markov chains''''' '''arXiv:2603.16855 (2026)'''&lt;br /&gt;
&lt;br /&gt;
'''Abstract'''&lt;br /&gt;
Phase transitions appear all over science, and are familiar from everyday life, such as water boiling, sugar melting into caramel, or nematic molecules turning smectic in liquid-crystal displays. The dynamics of phase transitions can be extremely slow, as for example when winter fog does not lift, that is, when the coarsening from many tiny water droplets to fewer but larger raindrops that feel the pull of gravity takes a long time. The dynamics of phase transitions is relevant also for the performance of computer algorithms. In the ubiquitous Metropolis Monte Carlo algorithm, the mixing dynamics towards equilibrium leads to the solution of a sampling problem. It is governed by the same reversibility and detailed-balance principles as the overdamped physical dynamics of fog. For the phase-separated Lennard-Jones system, we describe here how the coarsening dynamics of non-reversible &amp;quot;lifted&amp;quot; variants of the Metropolis algorithm proceeds on much faster time scales, with the microscopic non-reversibility translating into large-scale relative motion of droplets that is impossible under the Ostwald-ripening condition of reversibility. A density-displacement coupling moves droplets relative to each other through a lensing effect. Efficient implementations of the long-range Metropolis algorithm and its non-reversible lifting (event-chain Monte Carlo) allow us to show that, in consequence, the coarsening growth exponent is larger under lifting. For large system sizes, the computing problem is thus solved infinitely faster than before, with the outcome strictly unchanged with respect to the Metropolis algorithm. We also discuss the larger setting of our findings, namely that &amp;quot;lifted&amp;quot; non-reversible algorithms can be set up for generic reversible sampling methods, with applications going far beyond our example of lifting fog. &lt;br /&gt;
&lt;br /&gt;
[http://arxiv.org/pdf/2603.16855    Electronic version (from arXiv)]&lt;/div&gt;</description>
			<pubDate>Thu, 02 Apr 2026 15:56:11 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Tartero_Shiratani_Krauth_2026</comments>		</item>
		<item>
			<title>Sathwik Krauth 2025</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Sathwik_Krauth_2025</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''K. Sathwik, W. Krauth'' '''''Thermodynamic integration, fermion sign problem, and real-space renormalization''''' '''arXiv:2511.12794 (2025)'''&lt;br /&gt;
&lt;br /&gt;
'''Abstract'''&lt;br /&gt;
We reconsider real-space renormalization for the two-dimensional Ising model, following the path traced out by Wilson in Sect. VI of his 1975 Reviews of Modern Physics article. In that reference, Wilson considerably extended the Kadanoff decimation procedure towards a possibly rigorous construction of a real-space scale-invariant Hamiltonian. Wilson’s construction has, to the best of our knowledge, never been fully understood and thus neither reproduced nor generalized. In the present work, we use Monte Carlo sampling in combination with thermodynamic integration in order to retrace Wilson’s computation for a real-space renormalization with a number of terms in the Hamiltonian. We elaborate on the connection of real-space renormalization with the fermion sign problem and discuss to what extent our Monte Carlo procedure actually implements Wilson’s program from half a century ago.&lt;br /&gt;
&lt;br /&gt;
[http://arxiv.org/pdf/2511.12794  Electronic version (from arXiv)]&lt;/div&gt;</description>
			<pubDate>Tue, 25 Nov 2025 01:15:06 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Sathwik_Krauth_2025</comments>		</item>
		<item>
			<title>Massoulie et al 2025</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Massoulie_et_al_2025</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Brune Massoulié, Clément Erignoux, Cristina Toninelli, Werner Krauth''' '''''Velocity trapping in the lifted TASEP and the true self-avoiding random walk''''' '''  Physical Review Letters 135, 127102 (2025)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Abstract'''&lt;br /&gt;
We discuss non-reversible Markov-chain Monte Carlo algorithms that, for particle systems, rigorously sample the positional Boltzmann distribution and that have faster-than-physical dynamics. These algorithms all feature a non-thermal velocity distribution. They are exemplified by the lifted TASEP (totally asymmetric simple exclusion process), a one-dimensional lattice reduction of event-chain Monte Carlo. We analyze its dynamics in terms of a velocity trapping that arises from correlations between the local density and the particle velocities. This allows us to formulate a conjecture for its out-of-equilibrium mixing time scale, and to rationalize its equilibrium superdiffusive time scale. Both scales are faster than for the (unlifted) TASEP. They are further justified by our analysis of the lifted TASEP in terms of many-particle realizations of true self-avoiding random walks. We discuss velocity trapping beyond the case of one-dimensional lattice models and in more than one physical dimension. Possible applications beyond physics are pointed out.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://doi.org/10.1103/mqdr-x95j Published version (subscription needed)]&lt;br /&gt;
&lt;br /&gt;
[http://arxiv.org/pdf/2503.10575 Electronic version (from arXiv)]&lt;/div&gt;</description>
			<pubDate>Sun, 16 Mar 2025 11:51:53 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Massoulie_et_al_2025</comments>		</item>
		<item>
			<title>HMC harmonic.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/HMC_harmonic.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my public lectures on [https://www.physics.ox.ac.uk/events/algorithms-and-computations-theoretical-physics-set-lectures Algorithms and Computations in theoretical physics] at the University of Oxford (see the announcement for a syllabus). For more information, see [[Oxford_Lectures_2025|the main page of the public lectures]]. It presents the Hamiltonian Monte Carlo algorithm invented by Duane et al. (1987). For an in-depth discussion, see the reference below. Other programs in the same series are the [[Metropolis_harmonic.py|Metropolis algorithm]] and the [[Levy_harmonic.py|Lévy construction]], both for the harmonic chain. &lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
 import math, random&lt;br /&gt;
 &lt;br /&gt;
 def U(x):&lt;br /&gt;
     U = 0.0&lt;br /&gt;
     for k in range(N):&lt;br /&gt;
         k_minus = (k - 1) % N&lt;br /&gt;
         x_minus = x[k_minus]&lt;br /&gt;
         if k == 0: x_minus -= L&lt;br /&gt;
         U += (x[k] - x_minus) ** 2 / 2.0&lt;br /&gt;
     return U&lt;br /&gt;
 &lt;br /&gt;
 def grad_U(x, k):&lt;br /&gt;
     k_plus = (k + 1) % N&lt;br /&gt;
     k_minus = (k - 1) % N&lt;br /&gt;
     x_plus = x[k_plus]&lt;br /&gt;
     if k == N - 1: x_plus += L&lt;br /&gt;
     x_minus = x[k_minus]&lt;br /&gt;
     if k == 0: x_minus -= L&lt;br /&gt;
     return 2.0 * x[k] - x_minus - x_plus&lt;br /&gt;
 &lt;br /&gt;
 def LeapFrog(x, p, DeltaTau, I):&lt;br /&gt;
     p = [p[k] - DeltaTau * grad_U(x, k) / 2.0 for k in range(N)]&lt;br /&gt;
     for iter in range(I):&lt;br /&gt;
         x = [x[k] + DeltaTau * p[k] for k in range(N)]&lt;br /&gt;
         if iter != I - 1: p = [p[k] - DeltaTau * grad_U(x, k) for k in range(N)]&lt;br /&gt;
     p = [p[k] - DeltaTau * grad_U(x, k) / 2.0 for k in range(N)]&lt;br /&gt;
     return x, p&lt;br /&gt;
 &lt;br /&gt;
 N = 8&lt;br /&gt;
 L = 2 * N&lt;br /&gt;
 DeltaTau = 0.1&lt;br /&gt;
 I = 16&lt;br /&gt;
 Iter = 100000&lt;br /&gt;
 x = [L / N  * k for k in range(N)]&lt;br /&gt;
 U_old = U(x)&lt;br /&gt;
 for step in range(Iter):&lt;br /&gt;
 #&lt;br /&gt;
 #       Choose momenta and compute kinetic energy&lt;br /&gt;
 #&lt;br /&gt;
     p = [random.gauss(0.0, 1.0) for k in range(N)]&lt;br /&gt;
     K_old = sum([y ** 2 / 2.0 for y in p])&lt;br /&gt;
 #&lt;br /&gt;
 #       Do I sweeps of Leapfrog. CPU time is proportional to (2 * I + 1) * N&lt;br /&gt;
 #&lt;br /&gt;
     x_new, p_new = LeapFrog(x, p, DeltaTau, I)&lt;br /&gt;
 #&lt;br /&gt;
 #       Do the Metropolis step&lt;br /&gt;
 #&lt;br /&gt;
     U_new = U(x_new)&lt;br /&gt;
     K_new = sum([y ** 2 / 2.0 for y in p_new])&lt;br /&gt;
     if random.uniform(0.0, 1.0)  &amp;lt; math.exp(- U_new + U_old - K_new + K_old):&lt;br /&gt;
         x = x_new[:]&lt;br /&gt;
         U_old = U_new&lt;br /&gt;
              &lt;br /&gt;
&lt;br /&gt;
==Further information==&lt;br /&gt;
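The Metropolis step in the above program corrects for the leapfrog discretization error. As a hedged aside (a minimal sketch, not part of the lecture material), the near energy conservation of the leapfrog integrator can be checked for a single harmonic degree of freedom, with the same half-step/full-step structure as in the LeapFrog function above:&lt;br /&gt;
&lt;br /&gt;
 # leapfrog for a single harmonic oscillator, H = x ** 2 / 2 + p ** 2 / 2&lt;br /&gt;
 x, p = 1.0, 0.0&lt;br /&gt;
 DeltaTau = 0.1&lt;br /&gt;
 H_initial = x ** 2 / 2.0 + p ** 2 / 2.0&lt;br /&gt;
 p -= DeltaTau * x / 2.0                # initial half-step for the momentum&lt;br /&gt;
 for step in range(1000):&lt;br /&gt;
     x += DeltaTau * p                  # full step for the position&lt;br /&gt;
     if step != 999: p -= DeltaTau * x  # full step for the momentum&lt;br /&gt;
 p -= DeltaTau * x / 2.0                # final half-step for the momentum&lt;br /&gt;
 print(H_initial, x ** 2 / 2.0 + p ** 2 / 2.0)   # the two energies nearly agree&lt;br /&gt;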
&lt;br /&gt;
==References==&lt;br /&gt;
Krauth, W. [http://arxiv.org/pdf/2411.11690 Hamiltonian Monte Carlo vs. event-chain Monte Carlo: an appraisal of sampling strategies beyond the diffusive regime]. ArXiv: 2411.11690&lt;br /&gt;
[[Category:Python]] [[Category:Oxford2025]]&lt;/div&gt;</description>
			<pubDate>Wed, 19 Feb 2025 18:18:16 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:HMC_harmonic.py</comments>		</item>
		<item>
			<title>Metropolis harmonic.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Metropolis_harmonic.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my public lectures on [https://www.physics.ox.ac.uk/events/algorithms-and-computations-theoretical-physics-set-lectures Algorithms and Computations in theoretical physics] at the University of Oxford (see the announcement for a syllabus). For more information, see [[Oxford_Lectures_2025|the main page of the public lectures]]. It presents the Metropolis Monte Carlo algorithm invented by Metropolis et al. (1953). For an in-depth discussion, see the reference below. Other programs in the same series are the [[HMC_harmonic.py|Hamiltonian Monte Carlo]] algorithm and the [[Levy_harmonic.py|Lévy construction]], both for the harmonic chain.&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
 import math, random&lt;br /&gt;
 &lt;br /&gt;
 def U(x):&lt;br /&gt;
     U = 0.0&lt;br /&gt;
     for k in range(N):&lt;br /&gt;
         k_minus = (k - 1) % N&lt;br /&gt;
         x_minus = x[k_minus]&lt;br /&gt;
         if k == 0: x_minus -= L&lt;br /&gt;
         U += (x[k] - x_minus) ** 2 / 2.0&lt;br /&gt;
     return U&lt;br /&gt;
 &lt;br /&gt;
 N = 8&lt;br /&gt;
 L = 16&lt;br /&gt;
 delta = 1.0&lt;br /&gt;
 x = [L / N * k for k in range(N)]&lt;br /&gt;
 Iter = 1000000&lt;br /&gt;
 for iter in range(Iter):&lt;br /&gt;
     k = random.randint(0, N - 1)                   # random slice&lt;br /&gt;
     k_plus = (k + 1) % N&lt;br /&gt;
     k_minus = (k - 1) % N&lt;br /&gt;
     x_plus = x[k_plus]&lt;br /&gt;
     if k == N - 1: x_plus += L&lt;br /&gt;
     x_minus = x[k_minus]&lt;br /&gt;
     if k == 0: x_minus -= L&lt;br /&gt;
     x_new = x[k] + random.uniform(-delta, delta)   # new position at slice k&lt;br /&gt;
     U_k = ((x_plus - x[k]) ** 2  + (x[k] - x_minus) ** 2) / 2.0&lt;br /&gt;
     U_kp = ((x_plus - x_new) ** 2  + (x_new - x_minus) ** 2) / 2.0&lt;br /&gt;
     if random.uniform(0.0, 1.0) &amp;lt; math.exp(- (U_kp - U_k)): x[k] = x_new&lt;br /&gt;
&lt;br /&gt;
==Further information==&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
Krauth, W. [http://arxiv.org/pdf/2411.11690 Hamiltonian Monte Carlo vs. event-chain Monte Carlo: an appraisal of sampling strategies beyond the diffusive regime]. ArXiv: 2411.11690&lt;br /&gt;
[[Category:Python]] [[Category:Oxford2025]]&lt;/div&gt;</description>
			<pubDate>Wed, 19 Feb 2025 18:13:37 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Metropolis_harmonic.py</comments>		</item>
		<item>
			<title>Levy harmonic.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Levy_harmonic.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my public lectures on [https://www.physics.ox.ac.uk/events/algorithms-and-computations-theoretical-physics-set-lectures Algorithms and Computations in theoretical physics] at the University of Oxford (see the announcement for a syllabus). For more information, see [[Oxford_Lectures_2025|the main page of the public lectures]]. It presents the Lévy construction, in other words the direct sampling algorithm invented by P. Lévy in 1940. For an in-depth discussion, see the reference below. Other programs in the same series are the [[Metropolis_harmonic.py|Metropolis algorithm]] and the [[HMC_harmonic.py|Hamiltonian Monte Carlo algorithm]], both for the harmonic chain.&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
 import math, random&lt;br /&gt;
 &lt;br /&gt;
 def U(x):&lt;br /&gt;
     U = 0.0&lt;br /&gt;
     for k in range(N):&lt;br /&gt;
         k_minus = (k - 1) % N&lt;br /&gt;
         x_minus = x[k_minus]&lt;br /&gt;
         if k == 0: x_minus -= L&lt;br /&gt;
         U += (x[k] - x_minus) ** 2 / 2.0&lt;br /&gt;
     return U&lt;br /&gt;
 &lt;br /&gt;
 N = 8&lt;br /&gt;
 L = 16&lt;br /&gt;
 delta = 1.0&lt;br /&gt;
 &lt;br /&gt;
 Iter = 1000000&lt;br /&gt;
 &lt;br /&gt;
 for iter in range(Iter):&lt;br /&gt;
     xstart = random.uniform(0.0, L)&lt;br /&gt;
     xend = xstart + L&lt;br /&gt;
     x = [xstart]&lt;br /&gt;
     for k in range(1, N):        # loop over internal slices&lt;br /&gt;
         x_mean = ((N - k) * x[k - 1] + xend) / (1.0 + N - k)&lt;br /&gt;
         sigma = math.sqrt(1.0 / (1.0  + 1.0 / (N - k) ))&lt;br /&gt;
         x.append(random.gauss(x_mean, sigma))&lt;br /&gt;
     x.append(xend)&lt;br /&gt;
&lt;br /&gt;
==Further information==&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
Krauth, W. [http://arxiv.org/pdf/2411.11690 Hamiltonian Monte Carlo vs. event-chain Monte Carlo: an appraisal of sampling strategies beyond the diffusive regime]. ArXiv: 2411.11690&lt;br /&gt;
[[Category:Python]] [[Category:Oxford2025]]&lt;/div&gt;</description>
			<pubDate>Wed, 19 Feb 2025 18:11:05 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Levy_harmonic.py</comments>		</item>
		<item>
			<title>Oxford Lectures 2025</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Oxford_Lectures_2025</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;My [https://www.physics.ox.ac.uk/events/algorithms-and-computations-theoretical-physics-set-lectures Public Lectures at the University of Oxford (UK)], entitled '''Algorithms and computations in theoretical physics''', ran from 21 January 2025 through 11 March 2025 and are repeated (and adapted) from 20 January 2026 through 10 March 2026.&lt;br /&gt;
&lt;br /&gt;
===Lecture 1: 20 January 2026 (21 January 2025)===&lt;br /&gt;
We start our parallel exploration of physics and of computing with the concept of sampling, the process of producing examples (“samples”) of a probability distribution. In week 1, we consider “direct” sampling (the examples are obtained directly) and, among the many connections to physics, will come across the Maxwell distribution. In 1859, it marked the beginning of the field of statistical physics.&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/b/b4/WK_Lecture1_Oxford2025.pdf Here are the  notes for the first lecture (20 January 2026 (21 January 2025)).  ]&lt;br /&gt;
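&lt;br /&gt;
As a hedged illustration (a minimal sketch, not taken from the lecture notes) of what direct sampling means: for the Maxwell distribution at unit temperature and unit mass, each velocity component is an independent Gaussian, and the speed is obtained directly as their Euclidean norm:&lt;br /&gt;
&lt;br /&gt;
 import math, random&lt;br /&gt;
 # direct sampling of molecular speeds from the Maxwell distribution&lt;br /&gt;
 for sample in range(5):&lt;br /&gt;
     vx, vy, vz = [random.gauss(0.0, 1.0) for dim in range(3)]&lt;br /&gt;
     print(math.sqrt(vx ** 2 + vy ** 2 + vz ** 2))&lt;br /&gt;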
&lt;br /&gt;
===Lecture 2: 27 January 2026 (28 January 2025)===&lt;br /&gt;
We start with Special Topic A, in which we supplement the first lecture with two all-important special topics that explore the meaning of convergence in statistics and the fundamental usefulness of statistical reasoning. We can discuss them here, in the direct-sampling framework, but they are more generally relevant. The strong law of large numbers, for example, will turn into the famous ergodic theorem for Markov chains.&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/6/69/WK_Lecture2Spec_Oxford2025.pdf Here are the  notes for Special Topic A (28 January 2025).  ]&lt;br /&gt;
&lt;br /&gt;
Then, in this second lecture proper, we consider Markov-chain sampling, from the adults' game on the Monte Carlo heliport to modern ideas on non-reversibility. We discuss a lot of theory, but also six ten-line pseudo-code algorithms, none of them approximate, and all of them as intricate as they are short.&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/9/9a/WK_Lecture2_Oxford2025.pdf Here are the notes for Lecture 2.]&lt;br /&gt;
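&lt;br /&gt;
As a hedged illustration in the spirit of the heliport game (a minimal sketch, not the lecture's own pseudo-code): a pebble makes small random moves inside the square [-1, 1] x [-1, 1], a move that would leave the square is rejected and the old position is counted again, and the fraction of configurations inside the unit circle estimates pi / 4:&lt;br /&gt;
&lt;br /&gt;
 import random&lt;br /&gt;
 # Markov-chain estimate of pi: rejected moves keep (and re-count) the old position&lt;br /&gt;
 x, y = 1.0, 1.0&lt;br /&gt;
 delta = 0.3&lt;br /&gt;
 n_trials = 2 ** 20&lt;br /&gt;
 n_hits = 0&lt;br /&gt;
 for trial in range(n_trials):&lt;br /&gt;
     del_x, del_y = random.uniform(-delta, delta), random.uniform(-delta, delta)&lt;br /&gt;
     if abs(x + del_x) &amp;lt; 1.0 and abs(y + del_y) &amp;lt; 1.0:&lt;br /&gt;
         x, y = x + del_x, y + del_y&lt;br /&gt;
     if x ** 2 + y ** 2 &amp;lt; 1.0: n_hits += 1&lt;br /&gt;
 print(4.0 * n_hits / n_trials)&lt;br /&gt;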
&lt;br /&gt;
===Lecture 3: 03 February 2026 (04 February 2025)===&lt;br /&gt;
In this third lecture, we consider Markov-chain sampling in an abstract setting in one dimension. We discuss some theory, but also seven ten-line pseudo-code algorithms, none of them approximate, and all of them as intricate as they are short. At the end, we discuss the foundations of statistical mechanics, as seen in a one-dimensional example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/8/80/WK_Lecture3_Oxford2025.pdf Here are the notes for Lecture 3.]&lt;br /&gt;
&lt;br /&gt;
* Here is [[Metropolis X2X4.py|the Metropolis algorithm for a single particle in an anharmonic potential]]&lt;br /&gt;
* Here is [[Factor Metropolis X2X4.py|the factorized Metropolis algorithm for a single particle in an anharmonic potential]]&lt;br /&gt;
* Here is [[ZigZag X2X4.py|the Zig-zag algorithm for a single particle in an anharmonic potential]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Lecture 4: 10 February 2026 (17 February 2025)===&lt;br /&gt;
In this fourth lecture, we treat classical many-particle systems that interact with hard-sphere potentials, a case of importance for the foundations of statistical mechanics but also for its applications. We move from Newtonian mechanics to Boltzmann mechanics and from classical mechanics to statistical mechanics, in a way that is again entirely example-based. We then treat the case of one-dimensional hard spheres and discover the Asakura–Oosawa interaction, the fifth force in nature. We end with a discussion of the double nature of pressure: kinematic and thermodynamic.&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/7/72/WK_Lecture4_Oxford2025.pdf Here are the notes for Lecture 4.]&lt;br /&gt;
&lt;br /&gt;
* Here is [[Event_disks_box.py| the event-driven algorithm for four hard disks in a square box]], in a few dozen lines of code.&lt;br /&gt;
&lt;br /&gt;
===Lecture 5: 17 February 2026 (18 February 2025)===&lt;br /&gt;
In this fifth lecture, we treat classical many-particle systems that interact with general interactions, in the example of the harmonic chain. &lt;br /&gt;
&lt;br /&gt;
Lecture notes for lecture 5 are forthcoming, but most of the material can be found [http://arxiv.org/pdf/2411.11690 here].&lt;br /&gt;
&lt;br /&gt;
* Here is [[Levy_harmonic.py| the Lévy-construction algorithm]] for the harmonic chain, in nine lines of code. It directly samples configurations.&lt;br /&gt;
* Here is [[Metropolis_harmonic.py| the Metropolis algorithm]] for the harmonic chain, in two dozen lines of code. It samples configurations using the algorithm developed by Metropolis et al. (1953).&lt;br /&gt;
* Here is [[HMC_harmonic.py| the Hamiltonian Monte Carlo algorithm]] for the harmonic chain, in three dozen lines of code. It implements the algorithm developed by Duane et al. (1987).&lt;br /&gt;
&lt;br /&gt;
===Lecture 6: 24 February 2026 (25 February 2025)===&lt;br /&gt;
This sixth lecture is the first of two lectures on quantum statistical mechanics. Starting from the quantum harmonic oscillator, we introduce density matrices and the Feynman path integral. The computational techniques that we concentrate on during these two weeks have widespread use in statistics and physics, and the fundamental matrix-squaring property of density matrices corresponds to the convolution of probability densities. The Lévy algorithm for the sampling of path integrals, on the other hand, is mathematically equivalent to what we discussed, in a totally different context, in Lecture 5.&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/d/d3/WK_Lecture6_Oxford2025.pdf Here are the notes for Lecture 6.]&lt;br /&gt;
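&lt;br /&gt;
As a hedged illustration of the matrix-squaring property (a minimal sketch, not taken from the lecture notes): for the harmonic oscillator with V(x) = x ** 2 / 2, the density matrix at inverse temperature 2 * beta follows from the one at beta by a discretized convolution, and its trace can be compared with the exact partition function 1 / (2 sinh(beta / 2)):&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 def rho_trotter(x, xp, beta):   # high-temperature (Trotter) density matrix&lt;br /&gt;
     return (math.exp(-(x - xp) ** 2 / (2.0 * beta)) / math.sqrt(2.0 * math.pi * beta) *&lt;br /&gt;
             math.exp(-beta * (x ** 2 + xp ** 2) / 4.0))&lt;br /&gt;
 &lt;br /&gt;
 nx = 101; dx = 0.1&lt;br /&gt;
 grid = [-5.0 + i * dx for i in range(nx)]&lt;br /&gt;
 beta = 0.125&lt;br /&gt;
 rho = [[rho_trotter(x, xp, beta) for xp in grid] for x in grid]&lt;br /&gt;
 while beta &amp;lt; 4.0:               # matrix squaring: rho(2 beta) = rho(beta) . rho(beta) * dx&lt;br /&gt;
     rho = [[sum(rho[i][k] * rho[k][j] for k in range(nx)) * dx for j in range(nx)] for i in range(nx)]&lt;br /&gt;
     beta *= 2.0&lt;br /&gt;
 Z = sum(rho[i][i] for i in range(nx)) * dx   # partition function as a trace&lt;br /&gt;
 print(beta, Z, 1.0 / (2.0 * math.sinh(beta / 2.0)))&lt;br /&gt;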
&lt;br /&gt;
===Lecture 7: 03 March 2026 (04 March 2025)===&lt;br /&gt;
This is the second of two lectures on computational quantum statistical mechanics. Building on the quantum harmonic oscillator, the density matrices and the Feynman path integral introduced in the sixth lecture, we will discuss a direct-sampling quantum Monte Carlo algorithm for bosons that illustrates the phenomenon of Bose–Einstein condensation. Before doing that, however, we must discuss quantum statistics of indistinguishable particles and set up a naive program against which to check our state-of-the-art simulation code.&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/f/f5/WK_Lecture7_Oxford2025.pdf Here are the notes for Lecture 7.]&lt;br /&gt;
&lt;br /&gt;
* Here is [[Two_cycles.py| Two-cycles.py]], an intricate seven-line algorithm for sampling random permutations... but only those without cycles longer than 2! &lt;br /&gt;
* Here is [[Direct_N_bosons.py| Direct-bosons.py]], a three-dozen-line direct-sampling algorithm for ideal bosons in a three-dimensional harmonic trap.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Lecture 8: 10 March 2026 (11 March 2025)===&lt;br /&gt;
The final lecture in the set is on the classical Ising model. We go from basic (and not so basic) enumerations to the high-temperature expansion of the Ising model, touch on its exact solution by Kac and Ward (1952) and then treat sampling methods: Metropolis, heatbath, and perfect sampling, to finish with the Wolff (1989) cluster algorithm.&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/5/52/WK_Lecture8_Oxford2025.pdf Here are the notes for Lecture 8.]&lt;br /&gt;
&lt;br /&gt;
* Here is [[Enumerate ising dos.py| Enumerate-ising.py]] that uses the Gray code enumeration to compute the density of states of the Ising model.&lt;br /&gt;
* Here is [[Markov_ising.py| Markov-ising.py]] that uses the Metropolis algorithm to sample the Boltzmann distribution of the Ising model.&lt;br /&gt;
* Here is [[Heatbath_ising.py| Heatbath-ising.py]] that uses the Heatbath algorithm for the above sampling problem.&lt;br /&gt;
* Here is [[Cluster_ising.py| Cluster-ising.py]] that uses the Wolff cluster algorithm, again for sampling the Boltzmann distribution of the Ising model.&lt;/div&gt;</description>
			<pubDate>Tue, 21 Jan 2025 13:01:59 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Oxford_Lectures_2025</comments>		</item>
		<item>
			<title>Krauth 2024 HMC ECMC</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Krauth_2024_HMC_ECMC</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''W. Krauth'' '''''Hamiltonian Monte Carlo vs. event-chain Monte Carlo: an appraisal of sampling strategies beyond the diffusive regime '''''&lt;br /&gt;
arXiv:2411.11690 (2024)&lt;br /&gt;
&lt;br /&gt;
'''Abstract'''&lt;br /&gt;
We discuss Hamiltonian Monte Carlo (HMC) and event-chain Monte Carlo (ECMC) for the one-dimensional chain of particles with harmonic interactions and benchmark them against local reversible Metropolis algorithms. While HMC achieves considerable speedup with respect to local reversible Monte Carlo algorithms, its autocorrelation functions of global observables such as the structure factor have slower scaling with system size than for ECMC, a lifted non-reversible Markov chain. This can be traced to the dependence of ECMC on a parameter of the harmonic energy, the equilibrium distance, which drops out when energy differences or gradients are evaluated. We review the recent literature and provide pseudocodes and Python programs. We finally discuss related models and generalizations beyond one-dimensional particle systems.&lt;br /&gt;
&lt;br /&gt;
[http://arxiv.org/pdf/2411.11690  Electronic version (from arXiv)]&lt;/div&gt;</description>
			<pubDate>Tue, 26 Nov 2024 00:11:18 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Krauth_2024_HMC_ECMC</comments>		</item>
		<item>
			<title>Hukushima Krauth 2024</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Hukushima_Krauth_2024</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''K. Hukushima, W. Krauth''' '''''Damage spreading and coupling in spin glasses and hard spheres ''''' '''Frontiers in Physics 12, 1507250 (2025)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Abstract'''&lt;br /&gt;
We study the connection between damage spreading, a phenomenon long discussed in the physics literature, and the coupling of Markov chains, a technique used to bound the mixing time. We discuss in parallel the Edwards-Anderson spin glass model and the hard-disk system, focusing on how coupling provides insights into the region of fast coupling within the paramagnetic and liquid phases. We also work out the connection between path coupling and damage spreading. Numerically, the scaling analysis of the mean coupling time determines a critical point between fast and slow couplings. The exact relationship between fast coupling and disordered phases has not been established rigorously, but we suggest that it will ultimately enhance our understanding of phase behavior in disordered systems. &lt;br /&gt;
&lt;br /&gt;
[http://arxiv.org/pdf/2410.05544 Electronic version (from arXiv)]&lt;br /&gt;
&lt;br /&gt;
[https://doi.org/10.3389/fphy.2024.1507250 Published version (open access)]&lt;/div&gt;</description>
			<pubDate>Thu, 10 Oct 2024 20:25:14 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Hukushima_Krauth_2024</comments>		</item>
		<item>
			<title>Essler Krauth 2023</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Essler_Krauth_2023</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''F. H. L. Essler, W. Krauth''' '''''Lifted TASEP: a Bethe ansatz integrable paradigm for non-reversible Markov chains ''''' ''' Phys. Rev. X 14, 041035 (2024)'''&lt;br /&gt;
&lt;br /&gt;
'''Popular Summary'''&lt;br /&gt;
Markov-chain Monte Carlo (MCMC) algorithms formulate the sampling problem for complex probability distributions as a simulation of fictitious physical systems in equilibrium, where all motion is diffusive and time reversible. But nonreversible algorithms can, in principle, sample distributions much more efficiently. In recent years, a class of “lifted” Markov chains has implemented this idea in practice, but the resulting algorithms are extremely difficult to analyze. In this work, we introduce an exactly solvable paradigm for nonreversible Markov chains.&lt;br /&gt;
&lt;br /&gt;
Our paradigm, which we term the lifted totally asymmetric simple exclusion process (TASEP), describes a particular type of nonreversible dynamics for particles on a one-dimensional lattice. We show that this dynamics allows for polynomial speedups in particle number compared to the famous Metropolis MCMC algorithm. The lifted-TASEP dynamics is, in fact, faster than that of any other known class of models. To arrive at our conclusions, we combine exact methods from the theory of integrable models with extensive numerical simulations. In particular, we prove that the lifted TASEP is integrable and determine the scaling of its relaxation and mixing times with system size.&lt;br /&gt;
&lt;br /&gt;
Our work opens the door to obtaining mathematically rigorous results for speedups of nonreversible MCMC algorithms, and more generally, of lifted Markov chains arising in interacting many-particle systems.&lt;br /&gt;
&lt;br /&gt;
[http://arxiv.org/pdf/2306.13059 Electronic version (from arXiv)]&lt;br /&gt;
&lt;br /&gt;
[https://doi.org/10.1103/PhysRevX.14.041035 Published version (open access)]&lt;br /&gt;
&lt;br /&gt;
==Further context==&lt;br /&gt;
The Lifted TASEP is discussed in my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot;, and a sample Python program can be found [[LiftedTASEPCompact.py|here]].&lt;br /&gt;
&lt;br /&gt;
My [[Massoulie_et_al_2025|2025 paper with B. Massoulié, C. Erignoux and C. Toninelli]] provides a deeper theoretical analysis.&lt;/div&gt;</description>
			<pubDate>Thu, 18 Jul 2024 15:10:05 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Essler_Krauth_2023</comments>		</item>
		<item>
			<title>Hard spheres coupling.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Hard_spheres_coupling.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import random&lt;br /&gt;
 &lt;br /&gt;
 N = 4&lt;br /&gt;
 L1 = [[0.2, 0.2], [0.7, 0.2], [0.2, 0.7], [0.7, 0.7]] # replace by 1000 valid&lt;br /&gt;
 L2 = [[0.3, 0.3], [0.8, 0.3], [0.3, 0.8], [0.8, 0.8]] # random configurations&lt;br /&gt;
                                                       # of N particles&lt;br /&gt;
 sigma = 0.1&lt;br /&gt;
 sigma_sq = sigma ** 2&lt;br /&gt;
 delta = 0.1&lt;br /&gt;
 iter = 0&lt;br /&gt;
 while True:&lt;br /&gt;
     iter += 1&lt;br /&gt;
     k = random.randint(0, N - 1)&lt;br /&gt;
     b = [random.uniform(delta, 1.0 - delta), random.uniform(delta, 1.0 - delta)]&lt;br /&gt;
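     # the same particle index k and the same proposed position b are used in both&lt;br /&gt;
     # configurations L1 and L2: this common randomness is what couples the two chains&lt;br /&gt;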
     a = L1[k]&lt;br /&gt;
     min_dist = min((b[0] - c[0]) ** 2 + (b[1] - c[1]) ** 2 for c in L1 if c != a)&lt;br /&gt;
     box_cond = min(b[0], b[1]) &amp;lt; sigma or max(b[0], b[1]) &amp;gt; 1.0 - sigma&lt;br /&gt;
     if not (box_cond or min_dist &amp;lt; 4.0 * sigma ** 2):&lt;br /&gt;
         a[:] = b&lt;br /&gt;
     a = L2[k]&lt;br /&gt;
     min_dist = min((b[0] - c[0]) ** 2 + (b[1] - c[1]) ** 2 for c in L2 if c != a)&lt;br /&gt;
     box_cond = min(b[0], b[1]) &amp;lt; sigma or max(b[0], b[1]) &amp;gt; 1.0 - sigma&lt;br /&gt;
     if not (box_cond or min_dist &amp;lt; 4.0 * sigma ** 2):&lt;br /&gt;
         a[:] = b&lt;br /&gt;
     print(L1)&lt;br /&gt;
     print(L2)&lt;br /&gt;
     print()&lt;br /&gt;
     if L1 == L2:&lt;br /&gt;
         print(iter)&lt;br /&gt;
         break&lt;/div&gt;</description>
			<pubDate>Wed, 12 Jun 2024 21:52:42 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Hard_spheres_coupling.py</comments>		</item>
		<item>
			<title>Ising coupling.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Ising_coupling.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
The present program illustrates the coupling of Markov chains and, in particular, the phenomenon of monotone coupling, in the example of the two-dimensional Ising model on an LxL square lattice with periodic boundary conditions. We start, at time t=0, with two configurations, one called SLow (all spins equal to -1) and the other called SHigh (all spins equal to +1), and run the same heat-bath Markov chain on both of them until the resulting configurations coincide. &lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 import random&lt;br /&gt;
  &lt;br /&gt;
 L = 64; N = L * L&lt;br /&gt;
 nbr = {i : ((i // L) * L + (i + 1) % L, (i + L) % N,&lt;br /&gt;
              (i // L) * L + (i - 1) % L, (i - L) % N) \&lt;br /&gt;
                                     for i in range(N)}&lt;br /&gt;
 beta = 0.4  #  beta_c = 0.4407 is the critical inverse temperature&lt;br /&gt;
 SLow = [-1 for site in range(N)]&lt;br /&gt;
 SHigh = [1 for site in range(N)]&lt;br /&gt;
 iter = 0&lt;br /&gt;
 while True:&lt;br /&gt;
     iter += 1&lt;br /&gt;
     k = random.randint(0, N - 1)&lt;br /&gt;
     Upsilon = random.uniform(0.0, 1.0)&lt;br /&gt;
     hLow = sum(SLow[nn] for nn in nbr[k])&lt;br /&gt;
     hHigh = sum(SHigh[nn] for nn in nbr[k])&lt;br /&gt;
     SLow[k] = -1&lt;br /&gt;
     if Upsilon &amp;lt; 1.0 / (1.0 + math.exp(-2.0 * beta * hLow)):&lt;br /&gt;
         SLow[k] = 1&lt;br /&gt;
     SHigh[k] = -1&lt;br /&gt;
     if Upsilon &amp;lt; 1.0 / (1.0 + math.exp(-2.0 * beta * hHigh)):&lt;br /&gt;
         SHigh[k] = 1&lt;br /&gt;
     if SHigh == SLow:&lt;br /&gt;
         print(iter / N)&lt;br /&gt;
         print(SLow, SHigh)&lt;br /&gt;
         break&lt;br /&gt;
&lt;br /&gt;
==Further information==&lt;br /&gt;
&lt;br /&gt;
This program can easily be adapted to the coupling-from-the-past approach to Markov-chain sampling introduced by Propp and Wilson in 1996, and then it allows one to obtain perfect samples of the Boltzmann distribution. It uses the concept of monotone coupling: when SLow and SHigh have coupled, ALL 2^N configurations have coupled also. This algorithm is also explained in Chapter 5 of my book (see below).&lt;br /&gt;
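&lt;br /&gt;
As a hedged illustration (a minimal sketch, not part of the original page), here is how the above monotone heat-bath coupling might be turned into coupling from the past, for a small 8x8 lattice. The random updates for the most recent time steps are kept fixed, and ever older updates are prepended, as the Propp-Wilson construction requires:&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 import random&lt;br /&gt;
 &lt;br /&gt;
 L = 8; N = L * L&lt;br /&gt;
 nbr = {i : ((i // L) * L + (i + 1) % L, (i + L) % N,&lt;br /&gt;
              (i // L) * L + (i - 1) % L, (i - L) % N) for i in range(N)}&lt;br /&gt;
 beta = 0.4&lt;br /&gt;
 moves = []           # heat-bath updates for times -len(moves), ..., -1&lt;br /&gt;
 T = 1&lt;br /&gt;
 while True:&lt;br /&gt;
     new_moves = [(random.randint(0, N - 1), random.uniform(0.0, 1.0))&lt;br /&gt;
                  for t in range(T - len(moves))]&lt;br /&gt;
     moves = new_moves + moves      # prepend older updates, keep recent ones fixed&lt;br /&gt;
     SLow = [-1] * N&lt;br /&gt;
     SHigh = [1] * N&lt;br /&gt;
     for k, Upsilon in moves:       # run both extremal chains from time -T to time 0&lt;br /&gt;
         hLow = sum(SLow[nn] for nn in nbr[k])&lt;br /&gt;
         hHigh = sum(SHigh[nn] for nn in nbr[k])&lt;br /&gt;
         SLow[k] = 1 if Upsilon &amp;lt; 1.0 / (1.0 + math.exp(-2.0 * beta * hLow)) else -1&lt;br /&gt;
         SHigh[k] = 1 if Upsilon &amp;lt; 1.0 / (1.0 + math.exp(-2.0 * beta * hHigh)) else -1&lt;br /&gt;
     if SLow == SHigh:              # coalescence at time 0: SLow is a perfect sample&lt;br /&gt;
         print(T, SLow)&lt;br /&gt;
         break&lt;br /&gt;
     T *= 2&lt;br /&gt;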
&lt;br /&gt;
==References==&lt;br /&gt;
* Propp J., Wilson D., Random Struct. Algorithms 9, 223 (1996)&lt;br /&gt;
* Krauth, W., Statistical Mechanics: Algorithms and Computations (Oxford University Press, 2006)  (Chapter 5)&lt;/div&gt;</description>
			<pubDate>Wed, 12 Jun 2024 20:56:14 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Ising_coupling.py</comments>		</item>
		<item>
			<title>Stopping circle.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Stopping_circle.py</link>
			<description>&lt;p&gt;Summary: /* Further Information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
In Lecture 5 of these lectures, we continue our exploration of Markov-chain Monte Carlo algorithms that sample perfectly from a given distribution. In Lecture 2, we already encountered such beautiful algorithms for [[Top to random simul stop.py | card shuffling]] and for [[Diffusion_CFTP.py|particle diffusion on a path graph]], and in the present Lecture 5, we will encounter more general perfect-sampling algorithms for hard spheres and for the Ising model. &lt;br /&gt;
&lt;br /&gt;
All this is nice, but isn't there a simple perfect-sampling algorithm that one can actually understand, say, for a single particle on a one-dimensional graph? Here it is, for a particle on an N-cycle (a path graph of N+1 vertices 0, 1, 2, ..., N, with periodic boundary conditions, where we identify vertex N with vertex 0): Start at vertex 0. With probability 1/N, output this vertex. Otherwise, perform a random walk, moving with probability 1/2 to the left and with probability 1/2 to the right, and keep track of the not-yet-visited vertices. Output the last vertex to be visited, that is, the final vertex removed from the not-visited set. The vertices output by this procedure are uniformly distributed.&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
What was explained above is more easily expressed in terms of a tiny Python program:&lt;br /&gt;
&lt;br /&gt;
 import random&lt;br /&gt;
 N_trials = 100000&lt;br /&gt;
 N = 6&lt;br /&gt;
 data = [0] * N&lt;br /&gt;
 for iter in range(N_trials):&lt;br /&gt;
     x = 0&lt;br /&gt;
     if random.uniform(0.0, 1.0) &amp;lt; 1.0 / N:&lt;br /&gt;
         data[x] += 1.0 / N_trials&lt;br /&gt;
     else:&lt;br /&gt;
         NotVisited = set([k for k in range(N)])&lt;br /&gt;
         NotVisited.discard(x)&lt;br /&gt;
         while len(NotVisited) &amp;gt; 0:&lt;br /&gt;
             sigma = random.choice([-1, 1])&lt;br /&gt;
             x = (x + sigma) % N&lt;br /&gt;
             NotVisited.discard(x)&lt;br /&gt;
         data[x] += 1.0 / N_trials&lt;br /&gt;
 print('stopping samples')&lt;br /&gt;
 for k in range(N):&lt;br /&gt;
     print('site = ', k,' probability = ', data[k])&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Output==&lt;br /&gt;
&lt;br /&gt;
Here is the output of the above Python program for N=6. We do 100,000 trials and see that the six vertices are output with equal probability (up to statistical fluctuations). &lt;br /&gt;
&lt;br /&gt;
 site =  0  probability =  0.1673&lt;br /&gt;
 site =  1  probability =  0.1671&lt;br /&gt;
 site =  2  probability =  0.1645&lt;br /&gt;
 site =  3  probability =  0.1657&lt;br /&gt;
 site =  4  probability =  0.1656&lt;br /&gt;
 site =  5  probability =  0.1696&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
* What works like a charm for the random walk on the cycle fails for all other graphs except the complete graph (see Lovász and Winkler (1993)).&lt;br /&gt;
* The [[Diffusion CFTP.py |coupling from the past approach]] discussed in Lecture 2 is much more robust.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
* Lovász, L., Winkler, P., On the last new vertex visited by a random walk, J. Graph Theory 17, 593 (1993)&lt;br /&gt;
* Levin, D. A., Peres, Y. &amp;amp; Wilmer, E. L. Markov Chains and Mixing Times (American Mathematical Society, 2008)&lt;/div&gt;</description>
			<pubDate>Wed, 12 Jun 2024 20:10:04 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Stopping_circle.py</comments>		</item>
		<item>
			<title>SSEPCoupling.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/SSEPCoupling.py</link>
			<description>&lt;p&gt;Summary: /* Further information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
The present program illustrates the coupling and, in particular, the phenomenon of monotone coupling in Markov chains, in the example of the Symmetric Simple Exclusion Process (SSEP) of NPart particles on the path graph of NSites sites without periodic boundary conditions. We start, at time t=0, with two configurations, one called ConfLow and another one, called ConfHigh, and run the same Markov chain on both of them until the resulting configurations coincide (in both configurations, at each time step, we attempt to move the same particle in the same direction). In the output of our program, we check that this coupling time scales as N^3 log N. NB: There are no periodic boundary conditions, but to simplify the program, we use the &amp;quot;phantom&amp;quot; vertex -1 (containing phantom particle -1) and the phantom vertex NSites (containing phantom particle NPart). Only particles 0 to NPart-1, which can live on vertices 0, ..., NSites-1, are &amp;quot;real&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 import random&lt;br /&gt;
 Ntrials = 100&lt;br /&gt;
 &lt;br /&gt;
 for NPart in [8, 16, 32, 64]:&lt;br /&gt;
     NSites = 2 * NPart&lt;br /&gt;
     Coupling  = []&lt;br /&gt;
     for Iter in range(Ntrials):&lt;br /&gt;
         ConfLow = {-1: -1, NPart: NSites}; ConfHigh = {-1: -1, NPart: NSites}&lt;br /&gt;
         for k in range(NPart):&lt;br /&gt;
             ConfLow[k] = k&lt;br /&gt;
             ConfHigh[NPart - 1 - k] = NSites - 1 - k&lt;br /&gt;
         iter = 0&lt;br /&gt;
         while True:&lt;br /&gt;
             iter += 1&lt;br /&gt;
             Active = random.randint (0, NPart - 1)&lt;br /&gt;
             sigma = random.choice([-1, 1])&lt;br /&gt;
             if ConfLow[Active + sigma] != ConfLow[Active] + sigma: ConfLow[Active] += sigma&lt;br /&gt;
             if ConfHigh[Active + sigma] != ConfHigh[Active] + sigma: ConfHigh[Active] += sigma&lt;br /&gt;
             CLow = [ConfLow[k] for k in range(NPart)]&lt;br /&gt;
             CHigh = [ConfHigh[k] for k in range(NPart)]&lt;br /&gt;
             for k in range(NPart):&lt;br /&gt;
                 if CLow[k] &amp;gt; CHigh[k]: print('Error')   # monotonicity must never be violated&lt;br /&gt;
             if ConfLow == ConfHigh:&lt;br /&gt;
                 Coupling.append(iter / NPart ** 3 / math.log(NPart))&lt;br /&gt;
                 break&lt;br /&gt;
     print(NPart, sum(Coupling) / len(Coupling))&lt;br /&gt;
&lt;br /&gt;
Output of the above program (mean coupling time / (N^3 log N)):&lt;br /&gt;
&lt;br /&gt;
  N t_coup / N^3 / log N&lt;br /&gt;
  8 1.2698&lt;br /&gt;
 16 1.1993&lt;br /&gt;
 32 1.1190&lt;br /&gt;
 64 1.1028&lt;br /&gt;
&lt;br /&gt;
==Further information==&lt;br /&gt;
&lt;br /&gt;
This program can easily be adapted to the coupling-from-the-past approach to Markov-chain sampling introduced by Propp and Wilson in 1996, and then it allows one to obtain perfect samples of the Boltzmann distribution. It uses the concept of monotone coupling.  When ConfLow and ConfHigh have coupled, ALL configurations have coupled also.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* Propp J., Wilson D., Random Struct. Algorithms 9, 223 (1996)&lt;/div&gt;</description>
			<pubDate>Wed, 12 Jun 2024 19:59:51 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:SSEPCoupling.py</comments>		</item>
		<item>
			<title>Factor ZigZag X2X4.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Factor_ZigZag_X2X4.py</link>
			<description>&lt;p&gt;Summary: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 import random&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 def u(x):&lt;br /&gt;
     return x ** 2 / 2.0 + x ** 4 / 4.0&lt;br /&gt;
 &lt;br /&gt;
 x = 0.0&lt;br /&gt;
 time_ev = 0.0&lt;br /&gt;
 sigma = 1 if x &amp;lt;= 0.0 else -1&lt;br /&gt;
 &lt;br /&gt;
 data = []&lt;br /&gt;
 n_samples = 10 ** 6&lt;br /&gt;
 while len(data) &amp;lt; n_samples:&lt;br /&gt;
     delta_u2 = -math.log(random.uniform(0.0, 1.0))&lt;br /&gt;
     delta_u4 = -math.log(random.uniform(0.0, 1.0))&lt;br /&gt;
     new_x2 = math.sqrt(2.0 * delta_u2)&lt;br /&gt;
     new_x4 = (4.0 * delta_u4) ** 0.25&lt;br /&gt;
     new_x = sigma * min(abs(new_x2), abs(new_x4))&lt;br /&gt;
     new_time_ev = time_ev + abs(new_x - x)&lt;br /&gt;
     for t in range(math.ceil(time_ev), math.floor(new_time_ev) + 1):&lt;br /&gt;
         data.append(x + sigma * (t - time_ev))&lt;br /&gt;
     x = new_x&lt;br /&gt;
     time_ev = new_time_ev&lt;br /&gt;
     sigma *= -1&lt;br /&gt;
 plt.title('Factor-Zig-Zag algorithm, anharmonic oscillator')&lt;br /&gt;
 plt.xlabel('$x$')&lt;br /&gt;
 plt.ylabel('$\pi(x)$')&lt;br /&gt;
 plt.hist(data, bins=100, density=True, label='data')&lt;br /&gt;
 XValues = []&lt;br /&gt;
 YValues = []&lt;br /&gt;
 for i in range(-1000,1000):&lt;br /&gt;
     x = i / 400.0&lt;br /&gt;
     XValues.append(x)&lt;br /&gt;
     YValues.append(math.exp(- u(x)) / 1.93525)&lt;br /&gt;
 plt.plot(XValues, YValues, label='theory')&lt;br /&gt;
 plt.legend(loc='upper right')&lt;br /&gt;
 plt.show()&lt;br /&gt;
==Further information==&lt;br /&gt;
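The histogram produced by this program (and by the related programs in this series) is compared to exp(-u(x)) divided by the constant 1.93525. As a hedged aside (a minimal check, not part of the original page), this constant is the normalization integral of exp(-u(x)), which can be verified numerically:&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 def u(x): return x ** 2 / 2.0 + x ** 4 / 4.0&lt;br /&gt;
 # Riemann-sum estimate of Z = integral of exp(-u(x)) dx over the real line&lt;br /&gt;
 dx = 0.001&lt;br /&gt;
 Z = sum(math.exp(-u(-5.0 + i * dx)) * dx for i in range(10000))&lt;br /&gt;
 print(Z)   # approximately 1.93525&lt;br /&gt;
&lt;br /&gt;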
==References==&lt;br /&gt;
* Tartero, G., Krauth, W. Concepts in Monte Carlo sampling, Am. J. Phys. 92, 65–77 (2024) [https://arxiv.org/pdf/2309.03136 arXiv:2309.03136]&lt;/div&gt;</description>
			<pubDate>Tue, 11 Jun 2024 06:50:49 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Factor_ZigZag_X2X4.py</comments>		</item>
		<item>
			<title>Bounded Lifted Metropolis X2X4.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Bounded_Lifted_Metropolis_X2X4.py</link>
			<description>&lt;p&gt;Summary: /* Python program */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 import random&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 def u(x):&lt;br /&gt;
     return x ** 2 / 2.0 + x ** 4 / 4.0&lt;br /&gt;
 &lt;br /&gt;
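 # u_bound: piecewise-linear bound on u(|x|); its slope on the interval [n - 1, n]&lt;br /&gt;
 # is n + n ** 3, which bounds the derivative u'(x) = x + x ** 3 on that interval&lt;br /&gt;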
 def u_bound(pos):&lt;br /&gt;
     floor = math.floor(abs(pos))&lt;br /&gt;
     ceiling = math.ceil(abs(pos))&lt;br /&gt;
     u_floor = 0&lt;br /&gt;
     for n in range(floor + 1):&lt;br /&gt;
         u_floor += n + n ** 3&lt;br /&gt;
     u_ceiling = u_floor + ceiling + ceiling ** 3&lt;br /&gt;
     u_pos = u_floor&lt;br /&gt;
     if floor != abs(pos):&lt;br /&gt;
         m = (u_ceiling - u_floor) / (ceiling - floor)&lt;br /&gt;
         u_pos = m * (abs(pos) - floor) + u_floor&lt;br /&gt;
     return u_pos&lt;br /&gt;
 &lt;br /&gt;
 x = 0.0&lt;br /&gt;
 delta = 0.1&lt;br /&gt;
 sigma = random.choice([-1, 1])&lt;br /&gt;
 data = []&lt;br /&gt;
 n_samples = 10 ** 6&lt;br /&gt;
 for i in range(n_samples):&lt;br /&gt;
     new_x = x + sigma * random.uniform(0.0, delta)&lt;br /&gt;
     delta_u = u(new_x) - u(x)&lt;br /&gt;
     delta_u_tilde = u_bound(new_x) - u_bound(x)&lt;br /&gt;
     if random.uniform(0.0, 1.0) &amp;lt; min(1.0, math.exp(-delta_u_tilde)):&lt;br /&gt;
         x = new_x&lt;br /&gt;
     else:&lt;br /&gt;
         if random.uniform(0.0, 1.0) &amp;gt; (1.0 - math.exp(-delta_u)) / (1.0 - math.exp(-delta_u_tilde)):&lt;br /&gt;
             x = new_x&lt;br /&gt;
         else:&lt;br /&gt;
             sigma *= -1&lt;br /&gt;
     data.append(x)&lt;br /&gt;
 &lt;br /&gt;
 plt.title('Bounded-Lifted Metropolis algorithm, anharmonic oscillator')&lt;br /&gt;
 plt.xlabel('$x$')&lt;br /&gt;
 plt.ylabel('$\pi(x)$')&lt;br /&gt;
 plt.hist(data, bins=100, density=True, label='data')&lt;br /&gt;
 XValues = []&lt;br /&gt;
 YValues = []&lt;br /&gt;
 for i in range(-1000,1000):&lt;br /&gt;
     x = i / 400.0&lt;br /&gt;
     XValues.append(x)&lt;br /&gt;
     YValues.append(math.exp(- u(x)) / 1.93525)&lt;br /&gt;
 plt.plot(XValues, YValues, label='theory')&lt;br /&gt;
 plt.legend(loc='upper right')&lt;br /&gt;
 plt.show()&lt;br /&gt;
&lt;br /&gt;
This program takes a [[Bernoulli two pebbles patch.py| two-pebble decision]] that we already encountered as an &amp;quot;advanced&amp;quot; algorithm to sample the Bernoulli distribution.&lt;br /&gt;
&lt;br /&gt;
==Further information==&lt;br /&gt;
==References==&lt;br /&gt;
* Tartero, G., Krauth, W. Concepts in Monte Carlo sampling, Am. J. Phys. 92, 65–77 (2024) [https://arxiv.org/pdf/2309.03136 arXiv:2309.03136]&lt;/div&gt;</description>
			<pubDate>Tue, 11 Jun 2024 06:39:55 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Bounded_Lifted_Metropolis_X2X4.py</comments>		</item>
		<item>
			<title>Factor Metropolis X2X4 patch.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Factor_Metropolis_X2X4_patch.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 import random&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 &lt;br /&gt;
 def u(x): return x ** 2 / 2.0 + x ** 4 / 4.0&lt;br /&gt;
 &lt;br /&gt;
 def u2(pos): return 0.5 * pos ** 2&lt;br /&gt;
 &lt;br /&gt;
 def u4(pos): return 0.25 * pos ** 4&lt;br /&gt;
 &lt;br /&gt;
 x = 0.0&lt;br /&gt;
 delta = 0.1&lt;br /&gt;
 &lt;br /&gt;
 data = []&lt;br /&gt;
 n_samples = 10 ** 6&lt;br /&gt;
 for i in range(n_samples):&lt;br /&gt;
     new_x = x + random.uniform(-delta, delta)&lt;br /&gt;
     delta_u2 = u2(new_x) - u2(x)&lt;br /&gt;
     delta_u4 = u4(new_x) - u4(x)&lt;br /&gt;
     Upsilon2 = random.uniform(0.0, 1.0)&lt;br /&gt;
     Upsilon4 = random.uniform(0.0, 1.0)&lt;br /&gt;
     if Upsilon2 &amp;lt; math.exp(-delta_u2) and Upsilon4 &amp;lt; math.exp(-delta_u4):&lt;br /&gt;
         x = new_x&lt;br /&gt;
     data.append(x)&lt;br /&gt;
 plt.title('Factorized Metropolis algorithm (patch), anharmonic oscillator' )&lt;br /&gt;
 plt.xlabel('$x$')&lt;br /&gt;
 plt.ylabel('$\pi(x)$')&lt;br /&gt;
 plt.hist(data, bins=100, density=True,label='data')&lt;br /&gt;
 XValues = []&lt;br /&gt;
 YValues = []&lt;br /&gt;
 for i in range(-1000,1000):&lt;br /&gt;
     x = i / 400.0&lt;br /&gt;
     XValues.append(x)&lt;br /&gt;
     YValues.append(math.exp(- u(x)) / 1.93525)&lt;br /&gt;
 plt.plot(XValues, YValues, label='theory')&lt;br /&gt;
 plt.legend(loc='upper right')&lt;br /&gt;
 plt.show()&lt;br /&gt;
&lt;br /&gt;
==Further information==&lt;br /&gt;
==References==&lt;br /&gt;
* Tartero, G., Krauth, W. Concepts in Monte Carlo sampling, Am. J. Phys. 92, 65–77 (2024) [https://arxiv.org/pdf/2309.03136 arXiv:2309.03136]&lt;/div&gt;</description>
			<pubDate>Mon, 10 Jun 2024 23:00:54 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Factor_Metropolis_X2X4_patch.py</comments>		</item>
		<item>
			<title>ZigZag X2X4.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/ZigZag_X2X4.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 import random&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 def u(x): return x ** 2 / 2.0 + x ** 4 / 4.0&lt;br /&gt;
 &lt;br /&gt;
 x = 0.0&lt;br /&gt;
 time_ev = 0.0&lt;br /&gt;
 sigma = 1 if x &amp;lt;= 0.0 else -1&lt;br /&gt;
 &lt;br /&gt;
 data = []&lt;br /&gt;
 n_samples = 10 ** 6&lt;br /&gt;
 while len(data) &amp;lt; n_samples:&lt;br /&gt;
     delta_u = -math.log(random.random())&lt;br /&gt;
     new_x = sigma * math.sqrt(-1.0 + math.sqrt(1.0 + 4.0 * delta_u))&lt;br /&gt;
     new_time_ev = time_ev + abs(new_x - x)&lt;br /&gt;
     for t in range(math.ceil(time_ev), math.floor(new_time_ev) + 1):&lt;br /&gt;
         data.append(x + sigma * (t - time_ev))&lt;br /&gt;
     x = new_x&lt;br /&gt;
     time_ev = new_time_ev&lt;br /&gt;
     sigma *= -1&lt;br /&gt;
 &lt;br /&gt;
 plt.title('Zig-Zag algorithm, anharmonic oscillator')&lt;br /&gt;
 plt.xlabel('$x$')&lt;br /&gt;
 plt.ylabel('$\pi(x)$')&lt;br /&gt;
 plt.hist(data, bins=100, density=True, label='data')&lt;br /&gt;
 XValues = []&lt;br /&gt;
 YValues = []&lt;br /&gt;
 for i in range(-1000,1000):&lt;br /&gt;
     x = i / 400.0&lt;br /&gt;
     XValues.append(x)&lt;br /&gt;
     YValues.append(math.exp(- u(x)) / 1.93525)&lt;br /&gt;
 plt.plot(XValues, YValues, label='theory')&lt;br /&gt;
 plt.legend(loc='upper right')&lt;br /&gt;
 plt.show()&lt;br /&gt;
&lt;br /&gt;
==Further information==&lt;br /&gt;
==References==&lt;br /&gt;
* Tartero, G., Krauth, W. Concepts in Monte Carlo sampling, Am. J. Phys. 92, 65–77 (2024) [https://arxiv.org/pdf/2309.03136 arXiv:2309.03136]&lt;/div&gt;</description>
			<pubDate>Mon, 10 Jun 2024 22:49:10 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:ZigZag_X2X4.py</comments>		</item>
		<item>
			<title>Lifted Metropolis X2X4.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Lifted_Metropolis_X2X4.py</link>
			<description>&lt;p&gt;Summary: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 import random&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 def u(x):&lt;br /&gt;
     return x ** 2 / 2.0 + x ** 4 / 4.0&lt;br /&gt;
 &lt;br /&gt;
 x = 0.0&lt;br /&gt;
 delta = 0.1&lt;br /&gt;
 sigma = random.choice([-1, 1])&lt;br /&gt;
 &lt;br /&gt;
 data = []&lt;br /&gt;
 n_samples = 10 ** 6&lt;br /&gt;
 for i in range(n_samples):&lt;br /&gt;
     new_x = x + sigma * random.uniform(0.0, delta)&lt;br /&gt;
     delta_u = u(new_x) - u(x)&lt;br /&gt;
     if random.random() &amp;lt; math.exp(-delta_u): x = new_x&lt;br /&gt;
     else: sigma *= -1&lt;br /&gt;
     data.append(x)&lt;br /&gt;
 &lt;br /&gt;
 plt.title('Lifted Metropolis algorithm, anharmonic oscillator' )&lt;br /&gt;
 plt.xlabel('$x$')&lt;br /&gt;
 plt.ylabel('$\pi(x)$')&lt;br /&gt;
 plt.hist(data, bins=100, density=True,label='data')&lt;br /&gt;
 XValues = []&lt;br /&gt;
 YValues = []&lt;br /&gt;
 for i in range(-1000,1000):&lt;br /&gt;
     x = i / 400.0&lt;br /&gt;
     XValues.append(x)&lt;br /&gt;
     YValues.append(math.exp(- u(x)) / 1.93525)&lt;br /&gt;
 plt.plot(XValues, YValues, label='theory')&lt;br /&gt;
 plt.legend(loc='upper right')&lt;br /&gt;
 plt.show()&lt;br /&gt;
&lt;br /&gt;
==Further information==&lt;br /&gt;
==References==&lt;br /&gt;
* Tartero, G., Krauth, W. Concepts in Monte Carlo sampling, Am. J. Phys. 92, 65–77 (2024) [https://arxiv.org/pdf/2309.03136 arXiv:2309.03136]&lt;/div&gt;</description>
			<pubDate>Mon, 10 Jun 2024 22:43:48 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Lifted_Metropolis_X2X4.py</comments>		</item>
		<item>
			<title>Factor Metropolis X2X4.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Factor_Metropolis_X2X4.py</link>
			<description>&lt;p&gt;Summary: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 import random&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 &lt;br /&gt;
 def u(x): return x ** 2 / 2.0 + x ** 4 / 4.0&lt;br /&gt;
 &lt;br /&gt;
 def u2(pos): return 0.5 * pos ** 2&lt;br /&gt;
 &lt;br /&gt;
 def u4(pos): return 0.25 * pos ** 4&lt;br /&gt;
 &lt;br /&gt;
 x = 0.0&lt;br /&gt;
 delta = 0.1&lt;br /&gt;
 &lt;br /&gt;
 data = []&lt;br /&gt;
 n_samples = 10 ** 7&lt;br /&gt;
 for i in range(n_samples):&lt;br /&gt;
     new_x = x + random.uniform(-delta, delta)&lt;br /&gt;
     delta_u2 = u2(new_x) - u2(x)&lt;br /&gt;
     delta_u4 = u4(new_x) - u4(x)&lt;br /&gt;
     if random.random() &amp;lt; min(1.0, math.exp(-delta_u2)) \&lt;br /&gt;
     * min(1.0, math.exp(-delta_u4)): x = new_x&lt;br /&gt;
     data.append(x)&lt;br /&gt;
 &lt;br /&gt;
 plt.title('Factorized Metropolis algorithm, anharmonic oscillator')&lt;br /&gt;
 plt.xlabel('$x$')&lt;br /&gt;
 plt.ylabel('$\pi(x)$')&lt;br /&gt;
 plt.hist(data, bins=100, density=True, label='data')&lt;br /&gt;
 XValues = []&lt;br /&gt;
 YValues = []&lt;br /&gt;
 for i in range(-1000,1000):&lt;br /&gt;
     x = i / 400.0&lt;br /&gt;
     XValues.append(x)&lt;br /&gt;
     YValues.append(math.exp(- u(x)) / 1.93525)&lt;br /&gt;
 plt.plot(XValues, YValues, label='theory')&lt;br /&gt;
 plt.legend(loc='upper right')&lt;br /&gt;
 plt.show()&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Further information==&lt;br /&gt;
==References==&lt;br /&gt;
* Tartero, G., Krauth, W., Concepts in Monte Carlo sampling, Am. J. Phys. 92, 65–77 (2024) [https://arxiv.org/pdf/2309.03136 arXiv:2309.03136]&lt;/div&gt;</description>
			<pubDate>Mon, 10 Jun 2024 22:42:04 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Factor_Metropolis_X2X4.py</comments>		</item>
		<item>
			<title>Metropolis X2X4.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Metropolis_X2X4.py</link>
			<description>&lt;p&gt;Summary: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 import random&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 def u(x):&lt;br /&gt;
     return x ** 2 / 2.0 + x ** 4 / 4.0&lt;br /&gt;
 &lt;br /&gt;
 x = 0.0&lt;br /&gt;
 delta = 0.1&lt;br /&gt;
 &lt;br /&gt;
 data = []&lt;br /&gt;
 n_samples = 10 ** 6&lt;br /&gt;
 for i in range(n_samples):&lt;br /&gt;
     new_x = x + random.uniform(-delta, delta)&lt;br /&gt;
     delta_u = u(new_x) - u(x)&lt;br /&gt;
     if random.random() &amp;lt; math.exp(-delta_u): x = new_x&lt;br /&gt;
     data.append(x)&lt;br /&gt;
 &lt;br /&gt;
 plt.title('Metropolis algorithm, anharmonic oscillator')&lt;br /&gt;
 plt.xlabel('$x$')&lt;br /&gt;
 plt.ylabel('$\pi(x)$')&lt;br /&gt;
 plt.hist(data, bins=100, density=True, label='data')&lt;br /&gt;
 XValues = []&lt;br /&gt;
 YValues = []&lt;br /&gt;
 for i in range(-1000,1000):&lt;br /&gt;
     x = i / 400.0&lt;br /&gt;
     XValues.append(x)&lt;br /&gt;
     YValues.append(math.exp(- u(x)) / 1.93525)&lt;br /&gt;
 plt.plot(XValues, YValues, label='theory')&lt;br /&gt;
 plt.legend(loc='upper right')&lt;br /&gt;
 plt.show()&lt;br /&gt;
&lt;br /&gt;
==Further information==&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* Tartero, G., Krauth, W. Concepts in Monte Carlo sampling, Am. J. Phys. 92, 65–77 (2024) [https://arxiv.org/pdf/2309.03136 arXiv:2309.03136]&lt;/div&gt;</description>
			<pubDate>Mon, 10 Jun 2024 22:25:02 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Metropolis_X2X4.py</comments>		</item>
		<item>
			<title>LiftedTASEPCompact.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/LiftedTASEPCompact.py</link>
			<description>&lt;p&gt;Summary: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
My Lecture 3 is concerned with the Symmetric Simple Exclusion Process (SSEP), and its liftings, the TASEP (totally asymmetric simple exclusion process) and the lifted TASEP, treated here. All these dynamical systems carry the word &amp;quot;Process&amp;quot; in their descriptions because they are usually described in continuous time. We instead use a formulation in discrete time, where at each time step a single move is attempted. In fact, each move consists of the choice of a random particle and the choice of a random direction. &lt;br /&gt;
&lt;br /&gt;
Here, we are thus concerned with the lifted TASEP. With periodic boundary conditions, we may separate the forward-and-backward moving TASEP, as discussed in Lecture 3, into two independent copies. At each time step, the TASEP samples the random particle to be moved (forward). This, as discussed, is one half of a lifting of the SSEP. &lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 import random&lt;br /&gt;
 alpha = 0.8&lt;br /&gt;
 exponent = 2.0&lt;br /&gt;
 prefactor = 1.0&lt;br /&gt;
 NPart = 100; NSites = 2 * NPart&lt;br /&gt;
 NIter = int(prefactor * NPart ** exponent)&lt;br /&gt;
 NStrob = NIter // 40&lt;br /&gt;
 Conf = [1] * NPart + [0] * (NSites - NPart)&lt;br /&gt;
 Active = random.randint (0, NSites - 1)&lt;br /&gt;
 while Conf[Active] != 1: Active = random.randint(0, NSites - 1)&lt;br /&gt;
 Text = 'Periodic lifted TASEP, N= ' + str(NPart) + ', L= ' + str(NSites) + ', alpha= ' + str(alpha)&lt;br /&gt;
 print(' ' * (NSites// 2 + 1 - len(Text) // 2) + Text + ' ' * (NSites// 2 + 1 - len(Text) // 2))&lt;br /&gt;
 print('-' * (NSites + 2))&lt;br /&gt;
 for iter in range(NIter):&lt;br /&gt;
     NewActive = (Active + 1) % NSites&lt;br /&gt;
     if Conf[NewActive] == 0:&lt;br /&gt;
         Conf[Active], Conf[NewActive] = 0, 1&lt;br /&gt;
     Active = NewActive&lt;br /&gt;
     if  random.uniform(0.0, 1.0) &amp;lt; alpha:&lt;br /&gt;
         while True:&lt;br /&gt;
             Active = (Active - 1) % NSites&lt;br /&gt;
             if Conf[Active] == 1: break&lt;br /&gt;
     if iter % NStrob == 0:&lt;br /&gt;
         PP = '|'&lt;br /&gt;
         for k in range(NSites):&lt;br /&gt;
             if Conf[k] == 0: PP += ' '&lt;br /&gt;
             elif Active == k: PP += '^'&lt;br /&gt;
             else: PP += 'X'&lt;br /&gt;
         print(PP + '|')&lt;br /&gt;
 print('-' * (NSites + 2))&lt;br /&gt;
 Text = 'Total time = ' + str(prefactor) +  ' *  N ^ ' + str(exponent)&lt;br /&gt;
 print(' ' * (NSites// 2 + 1 - len(Text) // 2) + Text + ' ' * (NSites// 2 + 1 - len(Text) // 2))&lt;br /&gt;
&lt;br /&gt;
==Output==&lt;br /&gt;
&lt;br /&gt;
Here is the output of the above Python program for the lifted TASEP with, for simplicity, N=32, L=64, and only 20 configurations printed over the length of the simulation. The caret ^ indicates the active particle in the sample space that is lifted with respect to the SSEP. &lt;br /&gt;
&lt;br /&gt;
===For alpha = 0.8 ===&lt;br /&gt;
&lt;br /&gt;
           Periodic lifted TASEP, N= 32, L= 64, alpha= 0.8&lt;br /&gt;
 ------------------------------------------------------------------&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXXXXXXXXXX^XXXX                                |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXXXXXXXXX X^XXXX                               |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXXXX^  XXXXXXX XXX                             |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXX X XXXXXXXX^ XXXX                             |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXX X  XXXXXXX^XXXXX                             |&lt;br /&gt;
 |XXXXXXXX^ XXXXX XXXX X XXXXXXXXXXXX X                           |&lt;br /&gt;
 |XXXXX XXXXXXX XX^XXX X XXXXXXXXXXXX X                           |&lt;br /&gt;
 |XXXXX  ^XXXXXX XX XXXXXXXXXXXXXXXXX X                           |&lt;br /&gt;
 |XXXXX  X X^XXXXXX XXXXXXXXXXXXXXXXX X                           |&lt;br /&gt;
 |XXXXX  X XXXXXX XXXX^XXXXXXXXXXXXXX X                           |&lt;br /&gt;
 |XXXXX  X XXXXXX XXXXXXXXXXXXXXXX^XX X                           |&lt;br /&gt;
 |XXXXX  X XXXXXX XXXX XXXXXX^XXXXXXXXX                           |&lt;br /&gt;
 |XXXXX  X XXXXXX XXXX XXXXXXXXXXX^ X XXX                         |&lt;br /&gt;
 |XXXXX  X XXXXXX XXXX X ^XX XXXXXXXXXXXX                         |&lt;br /&gt;
 |XXXXX  X XXXXXX XXXX   XXXXXXXX^XXXXXXX                         |&lt;br /&gt;
 |XXXXX  X XXXXXX XXXX   XXXXXXXXX XX^XXXX                        |&lt;br /&gt;
 |XXXXX  X XXXXXX XXXX   XXXXXXXXX XXX XXX^                       |&lt;br /&gt;
 |XXXXX  X XXXXXX XXXX   XXXXXXX ^ XXXX X XXX                     |&lt;br /&gt;
 |XXXXX  X XXXXXX XXXX   XXXXXX^   X XXXX XXXX                    |&lt;br /&gt;
 |XXXXX  X  X XXXXXX^XX   XXXXXXX  X XXXX XXXX                    |&lt;br /&gt;
 |XXXXX  X  X  ^XXXXX XXX XXXXXXX  X XXXX XXXX                    |&lt;br /&gt;
 ------------------------------------------------------------------&lt;br /&gt;
                     Total time = 1.0 *  N ^ 2.0&lt;br /&gt;
&lt;br /&gt;
Clearly, N^2 steps are not sufficient to relax to equilibrium (the right part of the box remains empty).&lt;br /&gt;
&lt;br /&gt;
===For alpha = 0.5 ===&lt;br /&gt;
&lt;br /&gt;
           Periodic lifted TASEP, N= 32, L= 64, alpha= 0.5&lt;br /&gt;
 ------------------------------------------------------------------&lt;br /&gt;
 |XXXXXX^XXXXXXXXXXXXXXXXXXXXXXXXX                                |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX^                                |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXXXXX XXXXX  X X^  XX                          |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXXXXX  X^ XXX X    X  XX X                     |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXXXXX  X X  X  X  ^ XXXX X                     |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXX  ^  X  XX XX XX  XXXXX X                     |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXX     XX   XXXXX^  XXXXXX X                     |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXX     XX   XXXXX  XXXX ^  X  XX                 |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXX     XX   XXXXX  XX     X XX X^ X              |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXX     XX   XXXXX  XX         ^ XX X  X  X       |&lt;br /&gt;
 |XXXXXXXXXXXX X^X   X  X X X X  XXXX   X X     XXX X  X  X       |&lt;br /&gt;
 |XXXXXXXXXX X    XXX XX X ^X X  XXXX   X X     XXX X  X  X       |&lt;br /&gt;
 |XXXXXXXXXX X    XXX XX X    XX XX XX^ X X     XXX X  X  X       |&lt;br /&gt;
 |XXXXXXXXXX X    XXX XX X    XX     X^X  XX  XXXXX X  X  X       |&lt;br /&gt;
 |XXXXXXXXXX X    XXX XX X    XX     XX  XX  XX ^  XX  X XX X     |&lt;br /&gt;
 |XXXXXXXXXX X     X  XX  ^  XX X X   XX  X X  XXX XX  X XX X     |&lt;br /&gt;
 |XXXXXXXX X     X ^X     X XXXXX X   XX  X X  XXX XX  X XX X     |&lt;br /&gt;
 | XXXX  XX   XX  XXXX    X XXXXX X   XX  X X  XXX XX  X XX  ^    |&lt;br /&gt;
 | XXXX  XX   XX  XXXX    X XXXXX X   XX  X X      X^  X XXXXX X  |&lt;br /&gt;
 | XXXX  XX   XX  XX X     XX^X X XXX  X   X X  X   XX X XXXXX X  |&lt;br /&gt;
 | XXXX  XX   XX  XX X     XX X  X  XXX X X ^   XX  XX X XXXXX X  |&lt;br /&gt;
 ------------------------------------------------------------------&lt;br /&gt;
                     Total time = 1.0 *  N ^ 2.0&lt;br /&gt;
&lt;br /&gt;
At the critical value alpha = 0.5 (in general, alpha = N / L), N^2 steps appear to be sufficient to relax the system to equilibrium, and we conjecture that the mixing time of the lifted TASEP scales as N^2.&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
Further analysis of the lifted TASEP is performed in Massoulié et al. (2025). In that paper, we consider an ensemble average of the above picture, plotting the expected density as a function of position and of time, obtained from thousands of simulations. &lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* Essler F. H. L., Krauth W., Lifted TASEP: a Bethe ansatz integrable paradigm for non-reversible Markov chains, [https://doi.org/10.1103/PhysRevX.14.041035 Phys. Rev. X 14, 041035 (2024)].&lt;br /&gt;
* Kapfer S. C.  and Krauth W., Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium, [https://doi.org/10.1103/PhysRevLett.119.240603 Phys. Rev. Lett. 119, 240603 (2017)].&lt;br /&gt;
* Massoulié B., Erignoux C., Toninelli C. and Krauth W., Velocity trapping in the lifted TASEP and the true self-avoiding random walk, [https://doi.org/10.1103/mqdr-x95j Phys. Rev. Lett. 135, 127102 (2025)].&lt;/div&gt;</description>
			<pubDate>Mon, 10 Jun 2024 14:39:55 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:LiftedTASEPCompact.py</comments>		</item>
		<item>
			<title>TASEPCompact.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/TASEPCompact.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
My Lecture 3 is concerned with the [[SSEPCompact.py|Symmetric Simple Exclusion Process (SSEP)]] and its liftings, the TASEP (totally asymmetric simple exclusion process) (''treated here'') and the [[LiftedTASEPCompact.py|lifted TASEP]]. All these dynamical systems carry the word &amp;quot;Process&amp;quot; in their descriptions because they are usually described in continuous time. We instead use a formulation in discrete time, where at each time step a single move is attempted. In fact, each move consists of the choice of a random particle and the choice of a random direction. &lt;br /&gt;
&lt;br /&gt;
Here, as mentioned, we are concerned with the TASEP. With periodic boundary conditions, we may separate the forward-and-backward moving TASEP, as discussed in Lecture 3, into two independent copies. At each time step, the TASEP samples the random particle to be moved (forward). This, as discussed, is one half of a lifting of the SSEP. &lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 import random&lt;br /&gt;
 exponent = 2.0&lt;br /&gt;
 alpha = 0.5&lt;br /&gt;
 prefactor = 1.0&lt;br /&gt;
 NPart = 32; NSites = 2 * NPart&lt;br /&gt;
 NIter = int(prefactor * NPart ** exponent)&lt;br /&gt;
 NStrob = NIter // 20&lt;br /&gt;
 Conf = [1] * NPart + [0] * (NSites - NPart)&lt;br /&gt;
 Text = 'Periodic TASEP, N= ' + str(NPart) + ', L= ' + str(NSites)&lt;br /&gt;
 print(' ' * (NSites// 2 + 1 - len(Text) // 2) + Text + ' ' * (NSites// 2 + 1 - len(Text) // 2))&lt;br /&gt;
 print('-' * (NSites + 2))&lt;br /&gt;
 for iter in range(NIter):&lt;br /&gt;
     Active = random.randint (0, NSites - 1)&lt;br /&gt;
     while Conf[Active] != 1:&lt;br /&gt;
         Active = random.randint(0, NSites - 1)&lt;br /&gt;
     Step = 1&lt;br /&gt;
     NewActive = (Active + Step) % NSites&lt;br /&gt;
     if Conf[NewActive] == 0:&lt;br /&gt;
        Conf[Active], Conf[NewActive] = 0, 1&lt;br /&gt;
     if iter % NStrob == 0:&lt;br /&gt;
         PP = str()&lt;br /&gt;
         for k in range(NSites):&lt;br /&gt;
             if Conf[k] == 0: PP += ' '&lt;br /&gt;
             else: PP += 'X'&lt;br /&gt;
         print('|' + PP + '|')&lt;br /&gt;
 print('-' * (NSites + 2))&lt;br /&gt;
 Text = 'Total time = ' + str(prefactor) +  ' *  N ^ ' + str(exponent)&lt;br /&gt;
 print(' ' * (NSites// 2 + 1 - len(Text) // 2) + Text + ' ' * (NSites// 2 + 1 - len(Text) // 2))&lt;br /&gt;
&lt;br /&gt;
==Output==&lt;br /&gt;
&lt;br /&gt;
Here is the output of the above Python program for the TASEP with, for simplicity, N=32, L=64. Over the entire lifetime of the simulation (here N^2 single moves), only 20 configurations are shown, one roughly every 50 steps. &lt;br /&gt;
&lt;br /&gt;
                    Periodic TASEP, N= 32, L= 64&lt;br /&gt;
 ------------------------------------------------------------------&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX                                |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXXXXXXXXXXX XXX X                              |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXXXXXXXX XXXX XX  X                            |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXXXXXXX XXXXX X X X                            |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXXXXX XXXXXX X  XX X                           |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXXXX XXXXX X  XXX XX                           |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXXXX XXX XX  XXXX X X                          |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXX XXXXX  X XX XX  XXX                         |&lt;br /&gt;
 |XXXXXXXXXXXXXXX XXXXXXXXX  X XX X  XXXX                         |&lt;br /&gt;
 |XXXXXXXXXXXXXX XXXXXXXX XX  XX   XX XXX  X                      |&lt;br /&gt;
 |XXXXXXXXXXXXXX XXXXXXXX   XXXX    XX X X XX                     |&lt;br /&gt;
 |XXXXXXXX XXXXXXXXXXXXX   X XXXX   XX  XX XX                     |&lt;br /&gt;
 |XXXX XXXXXXXXXXXXXXXXX    XX XX X X  XX X X X                   |&lt;br /&gt;
 |XX XXXXXXXXXXXXXXXXXX X   X XXX   X XX X XX   X                 |&lt;br /&gt;
 | XXXXXXXXXXXXXXXXXX X XX   XXX X   XX  XXX   X    X             |&lt;br /&gt;
 | XXXXXXXXXXXXXXX X XX XX X X X X  XX X XX  X X      X           |&lt;br /&gt;
 | XXXXXXXXXXXXXX XX  X XXX X XX  X  XX XXX   X   X   X           |&lt;br /&gt;
 | XXXXXXXXXXX XX XXX  XX XX XX X X   XXXXX    X   X   X          |&lt;br /&gt;
 | XXXXXXXXX XXX XXXX  XX  XXXX   X  XXX  X X  XX    X  X         |&lt;br /&gt;
 | XXXXXXXX XXXX XXXX  XX  XXX X    XX XX  XX   X   X X    X      |&lt;br /&gt;
 | XXXXXXX XXXX XX XXX XX   XX X  X  XX X XXX   X   X  X   X      |&lt;br /&gt;
 ------------------------------------------------------------------&lt;br /&gt;
                     Total time = 1.0 *  N ^ 2.0&lt;br /&gt;
&lt;br /&gt;
With all its limitations, the program illustrates that N^2 steps are not sufficient to spread out the compact initial state and to approach equilibrium.&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
* The mixing time of the TASEP, t_mix \sim N^(5/2), has been computed by Baik &amp;amp; Liu (2016) (see references). This is in our units, where one move takes place per unit time.&lt;br /&gt;
* The relaxation time of the TASEP is likewise t_rel \sim N^(5/2). Again, this is in the MCMC units of one move per time step.&lt;br /&gt;
* The description of the TASEP as a lifting of the SSEP is from Kapfer et al. (2017) (see references).&lt;br /&gt;
* An example transition matrix for the TASEP with open boundary conditions (hard walls) can be found in Essler &amp;amp; Krauth (2024).&lt;br /&gt;
* The TASEP can be solved by Bethe ansatz, and many of its properties are known.&lt;br /&gt;
* As discussed at length in Lecture 3, the transition matrix of the TASEP is doubly stochastic so that, in equilibrium, every allowed configuration is equally likely (see the sketch below).&lt;br /&gt;
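&lt;br /&gt;
The following minimal sketch is an illustration only, not one of the lecture programs: it uses numpy and itertools, and a deliberately tiny system with NPart = 2 particles on NSites = 4 sites. It builds the discrete-time TASEP transition matrix explicitly and checks that both its rows and its columns sum to one.&lt;br /&gt;
&lt;br /&gt;
 import itertools&lt;br /&gt;
 import numpy as np&lt;br /&gt;
 &lt;br /&gt;
 NPart, NSites = 2, 4&lt;br /&gt;
 Confs = sorted(set(itertools.permutations([1] * NPart + [0] * (NSites - NPart))))  # all configurations&lt;br /&gt;
 index = {c: i for i, c in enumerate(Confs)}&lt;br /&gt;
 P = np.zeros((len(Confs), len(Confs)))&lt;br /&gt;
 for c in Confs:&lt;br /&gt;
     for site in range(NSites):&lt;br /&gt;
         if c[site] == 1:            # choose one of the NPart particles, probability 1 / NPart ...&lt;br /&gt;
             target = list(c)&lt;br /&gt;
             nxt = (site + 1) % NSites&lt;br /&gt;
             if c[nxt] == 0:         # ... and move it forward only if the next site is empty&lt;br /&gt;
                 target[site], target[nxt] = 0, 1&lt;br /&gt;
             P[index[c], index[tuple(target)]] += 1.0 / NPart&lt;br /&gt;
 print(np.allclose(P.sum(axis=1), 1.0))   # expected: True (rows sum to one, stochastic matrix)&lt;br /&gt;
 print(np.allclose(P.sum(axis=0), 1.0))   # expected: True (columns sum to one, doubly stochastic)&lt;br /&gt;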
&lt;br /&gt;
==References==&lt;br /&gt;
* Baik J. and Liu Z., TASEP on a Ring in Sub-relaxation Time Scale, J. Stat. Phys. 165, 1051–1085 (2016).&lt;br /&gt;
* Dhar D., An exactly solved model for interfacial growth, Phase Transitions 9, 51 (1987).&lt;br /&gt;
* Essler F. H. L., Krauth W., Lifted TASEP: a Bethe ansatz integrable paradigm for non-reversible Markov chains,  [https://doi.org/10.1103/PhysRevX.14.041035 Phys. Rev. X 14, 041035 (2024)].&lt;br /&gt;
* Kapfer S. C.  and Krauth W., Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium, [https://doi.org/10.1103/PhysRevLett.119.240603 Phys. Rev. Lett. 119, 240603 (2017)].&lt;/div&gt;</description>
			<pubDate>Mon, 10 Jun 2024 14:03:51 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:TASEPCompact.py</comments>		</item>
		<item>
			<title>SSEPCompact.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/SSEPCompact.py</link>
			<description>&lt;p&gt;Summary: /* Python program */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
My Lecture 3 is concerned with the ''Symmetric Simple Exclusion Process'' (SSEP), treated here, and its liftings, the TASEP (totally asymmetric simple exclusion process) and the ''lifted'' TASEP. All these dynamical systems carry the word &amp;quot;''Process''&amp;quot; in their descriptions because they are usually described in continuous time. Here, we instead use a formulation in discrete time, where at each time step t=0,1,2,..., a single move is attempted. In fact, each move consists of the choice of a random particle and the choice of a random direction. The SSEP is a local diffusive Markov chain, and it has very slow dynamics: it takes N^3 log N steps to bring it from a compact initial state into equilibrium.&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
 import math&lt;br /&gt;
 import random&lt;br /&gt;
 exponent = 3.0&lt;br /&gt;
 alpha = 0.5&lt;br /&gt;
 prefactor = 1.0&lt;br /&gt;
 NPart = 100; NSites = 2 * NPart&lt;br /&gt;
 NIter = int(prefactor * NPart ** exponent * math.log(NPart))&lt;br /&gt;
 NStrob = NIter // 400&lt;br /&gt;
 Conf = [1] * NPart + [0] * (NSites - NPart)&lt;br /&gt;
 Active = random.randint (0, NSites - 1)&lt;br /&gt;
 while Conf[Active] != 1:&lt;br /&gt;
     Active = random.randint(0, NSites - 1)&lt;br /&gt;
 Text = 'Periodic SSEP, N= ' + str(NPart) + ', L= ' + str(NSites)&lt;br /&gt;
 print(' ' * (NSites// 2 + 1 - len(Text) // 2) + Text + ' ' * (NSites// 2 + 1 - len(Text) // 2))&lt;br /&gt;
 print('-' * (NSites + 2))&lt;br /&gt;
 for iter in range(NIter):&lt;br /&gt;
     Active = random.randint (0, NSites - 1)&lt;br /&gt;
     while Conf[Active] != 1:&lt;br /&gt;
         Active = random.randint(0, NSites - 1)&lt;br /&gt;
     Step = random.choice([-1, 1])&lt;br /&gt;
     NewActive = (Active + Step) % NSites&lt;br /&gt;
     if Conf[NewActive] == 0:&lt;br /&gt;
        Conf[Active], Conf[NewActive] = 0, 1&lt;br /&gt;
     if iter % NStrob == 0:&lt;br /&gt;
         PP = str()&lt;br /&gt;
         for k in range(NSites):&lt;br /&gt;
             if Conf[k] == 0: PP += ' '&lt;br /&gt;
             else: PP += 'X'&lt;br /&gt;
         print('|' + PP + '|')&lt;br /&gt;
 print('-' * (NSites + 2))&lt;br /&gt;
 Text = 'Total time = ' + str(prefactor) +  ' *  N ^ ' + str(exponent) + ' * log N'&lt;br /&gt;
 print(' ' * (NSites// 2 + 1 - len(Text) // 2) + Text + ' ' * (NSites// 2 + 1 - len(Text) // 2))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This example program performs a large number of iterations of the Monte Carlo algorithm for the Symmetric Simple Exclusion Process, and prints equally spaced configuration snapshots (one every NIter // 400 iterations) over the entire simulation time.&lt;br /&gt;
&lt;br /&gt;
==Output==&lt;br /&gt;
&lt;br /&gt;
Here is the output of the above Python program with, for simplicity, N=32, L=64, and only 20 configurations over the length of the simulation.&lt;br /&gt;
&lt;br /&gt;
The initial configuration is compact. Clearly, the simulation has not run long enough to forget its initial state, and it would be necessary to increase the simulation time from N^2 log N to N^3 log N. In our simplified setting, the logarithm is difficult to see, but better analysis tools readily extract it from the numerical data. &lt;br /&gt;
&lt;br /&gt;
                     Periodic SSEP, N= 32, L= 64&lt;br /&gt;
 ------------------------------------------------------------------&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX                                |&lt;br /&gt;
 |XXXXXXXXXXXXXXXXXXXXXXXXXXXXX XX  X                             |&lt;br /&gt;
 | XXXXXXXXXXXXXXXXXXXXXXXXXXXXX  X    X                         X|&lt;br /&gt;
 | X XXXXXXXXXXXXXXXXXXXXXXXXXXX    X    X                     X X|&lt;br /&gt;
 |  X XXXXXXXXXXXXXXXXXXXXXX X XXX X        X                XX  X|&lt;br /&gt;
 |X   XXXXXXXXXXXXXXXXXXXXX XXX X  XX      X               X  X X |&lt;br /&gt;
 |X XXX XXXXXXXXXXXXXXXXXXXX XX   X   X X X               X X     |&lt;br /&gt;
 |  XXXX XXXXXXXXXXXXXXXXXXXX X    X   XX  X           X    X    X|&lt;br /&gt;
 | XXXXXX XXXXXXXXXXXXXXXXXX  X   X  X X  X X         X      X    |&lt;br /&gt;
 |  XXXXXXXX XXXXXXXXXXXXXXX X   X   X   XX X              XX    X|&lt;br /&gt;
 | XXXXXXX XXXXXXXXXXXXX XXXX  X X XX         XX            X X   |&lt;br /&gt;
 | XXXX XXXXXXXXXXXXXX XXXXX X  XXXX        X  X          X X     |&lt;br /&gt;
 |X XX XXXXXXXXXXXX XXXXXXX X XX XX X     X X               XX    |&lt;br /&gt;
 |X  XXXX XXXXXXXXXX XXXX XXXX XXX  X     X X               X   XX|&lt;br /&gt;
 | XXX XXXXXXXXXXX XXXXXXXXX X  X X   X X X X                 X  X|&lt;br /&gt;
 |X XXX XXXXXXXXXXX XXXX XXXXX X   X X   XX X                   XX|&lt;br /&gt;
 | XXXXX XXXXXXXXX X XXXXXXXXX  X  X   XXX     X                XX|&lt;br /&gt;
 |XXXX XXXXXXXXXXXX X XXXXX XX X  XX  X  X  XX                   X|&lt;br /&gt;
 |XXXX XXXXXXXXX XXXXXXXX XX X  XX X  X   XX   X                 X|&lt;br /&gt;
 |XXXXXX XXXXXX XXXXXXXXXXX X  X  X X  XX   X    X              X |&lt;br /&gt;
 |X XXXXXX XX XXXXXXXXXXXXX XX  X  X X  X   X  X              X  X|&lt;br /&gt;
 ------------------------------------------------------------------&lt;br /&gt;
                 Total time = 1.0 *  N ^ 2.0 * log N&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
* The mixing time of the SSEP, t_mix \sim N^3 log N, has been computed by Lacoin (2016, 2017) (see references).&lt;br /&gt;
* The relaxation time of the SSEP is t_rel \sim N^3 (without the logarithm). It is thus asymptotically smaller than the mixing time, leading to the cutoff phenomenon (spelled out below).&lt;br /&gt;
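&lt;br /&gt;
Spelled out in the same discrete-time units (this is just the arithmetic of the two scalings quoted above, not a statement taken from the references):&lt;br /&gt;
&lt;br /&gt;
 t_mix / t_rel \sim (N^3 log N) / N^3 = log N,&lt;br /&gt;
&lt;br /&gt;
which diverges for large N, so the relaxation takes place on an asymptotically negligible fraction of the mixing time.&lt;br /&gt;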
&lt;br /&gt;
==References==&lt;br /&gt;
* Essler F. H. L., Krauth W., Lifted TASEP: a Bethe ansatz integrable paradigm for non-reversible Markov chains, [https://doi.org/10.1103/PhysRevX.14.041035 Phys. Rev. X 14, 041035 (2024)].&lt;br /&gt;
* Lacoin H., The cutoff profile for the simple exclusion process on the circle, Ann. Probab. 44, 3399 (2016).&lt;br /&gt;
* Lacoin H., The simple exclusion process on the circle has a diffusive cutoff window, Ann. Inst. H. Poincaré Probab. Statist. 53, 1402 (2017).&lt;br /&gt;
* Kapfer S. C.  and Krauth W., Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium, [https://doi.org/10.1103/PhysRevLett.119.240603 Phys. Rev. Lett. 119, 240603 (2017)].&lt;/div&gt;</description>
			<pubDate>Mon, 10 Jun 2024 12:31:12 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:SSEPCompact.py</comments>		</item>
		<item>
			<title>Diffusion CFTP coupl pos.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Diffusion_CFTP_coupl_pos.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
 import random&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 &lt;br /&gt;
 N = 5&lt;br /&gt;
 pos = []&lt;br /&gt;
 for stat in range(100000):&lt;br /&gt;
    all_arrows = {}&lt;br /&gt;
    time_tot = 0&lt;br /&gt;
    while True:&lt;br /&gt;
       time_tot -= 1&lt;br /&gt;
       arrows = [random.choice([-1, 0, 1]) for i in range(N)]&lt;br /&gt;
       if arrows[0] == -1: arrows[0] = 0&lt;br /&gt;
       if arrows[N - 1] == 1: arrows[N - 1] = 0&lt;br /&gt;
       all_arrows[time_tot] = arrows&lt;br /&gt;
       positions=set(range(0, N))&lt;br /&gt;
       for t in range(time_tot, 0):&lt;br /&gt;
          positions = set([b + all_arrows[t][b] for b in positions])&lt;br /&gt;
          if len(positions) == 1: break&lt;br /&gt;
       if len(positions) == 1: break&lt;br /&gt;
    a = positions.pop()&lt;br /&gt;
    pos.append(a)&lt;br /&gt;
 plt.title('Backward coupling: 1-d with walls: position at coupling time')&lt;br /&gt;
 plt.hist(pos, bins=N, range=(-0.5, N - 0.5), density=True)&lt;br /&gt;
 plt.savefig('backward_coupling_position.png')&lt;br /&gt;
 plt.show()&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 13:32:45 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Diffusion_CFTP_coupl_pos.py</comments>		</item>
		<item>
			<title>Diffusion forward.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Diffusion_forward.py</link>
			<description>&lt;p&gt;Summary: /* Context */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
We consider a particle diffusing on a path graph with five sites, but we start, at time t=0, at all possible positions. As in [[Diffusion.py| the naive program Diffusion.py]], arrows go &amp;quot;down&amp;quot;, &amp;quot;straight&amp;quot; and &amp;quot;up&amp;quot; with equal probability. In application of the Metropolis algorithm, an arrow that goes outside the range [0, N - 1] is rejected, that is, replaced by a straight arrow. The stationary distribution of this reversible Markov chain is uniform on the N sites, because the probability to move from site i to site j equals the probability to move from site j to site i.&lt;br /&gt;
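&lt;br /&gt;
A minimal numerical check of this symmetry argument (it uses numpy and is only an illustration, separate from the lecture program below) builds the 5 x 5 transition matrix and verifies that it is symmetric and that the uniform distribution is stationary:&lt;br /&gt;
&lt;br /&gt;
 import numpy as np&lt;br /&gt;
 &lt;br /&gt;
 N = 5&lt;br /&gt;
 P = np.zeros((N, N))&lt;br /&gt;
 for i in range(N):&lt;br /&gt;
     for step in (-1, 0, 1):               # 'down', 'straight', 'up', each with probability 1/3&lt;br /&gt;
         j = min(max(i + step, 0), N - 1)  # a rejected move keeps the particle at site i&lt;br /&gt;
         P[i, j] += 1.0 / 3.0&lt;br /&gt;
 pi = np.ones(N) / N                       # uniform distribution on the N sites&lt;br /&gt;
 print(np.allclose(P, P.T))                # expected: True, P(i to j) equals P(j to i)&lt;br /&gt;
 print(np.allclose(pi @ P, pi))            # expected: True, the uniform distribution is stationary&lt;br /&gt;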
&lt;br /&gt;
==Python program==&lt;br /&gt;
 import random&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 &lt;br /&gt;
 N = 5&lt;br /&gt;
 pos = []&lt;br /&gt;
 for stat in range(10000):&lt;br /&gt;
    posit = set(range(N))&lt;br /&gt;
    t = 0&lt;br /&gt;
    while True:&lt;br /&gt;
       t += 1&lt;br /&gt;
       posit = set([min(max(b + random.choice([-1, 0, 1]), 0), N - 1) for b in posit])&lt;br /&gt;
       if len(posit) == 1: break&lt;br /&gt;
    pos.append(posit.pop())&lt;br /&gt;
 plt.title('Forward coupling: 1-d with walls: position of the coupled config.')&lt;br /&gt;
 plt.hist(pos,bins=N,range=(-0.5, N - 0.5), density=True)&lt;br /&gt;
 plt.savefig('ForwardCoupling.png')&lt;br /&gt;
 plt.show()&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 13:29:20 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Diffusion_forward.py</comments>		</item>
		<item>
			<title>Top to random TVD.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Top_to_random_TVD.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
 import math&lt;br /&gt;
 import copy&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 &lt;br /&gt;
 def factorial(n):&lt;br /&gt;
     return 1 if n == 0 else n * factorial(n - 1)&lt;br /&gt;
 &lt;br /&gt;
 for N in [3, 4, 5, 6, 7, 8, 9, 10]:&lt;br /&gt;
     print(N, 'N')&lt;br /&gt;
     XValues = []&lt;br /&gt;
     YValues = []&lt;br /&gt;
 #&lt;br /&gt;
 # top-to-random shuffle: start from a single permutation carrying probability one&lt;br /&gt;
 #&lt;br /&gt;
     Pit = {}&lt;br /&gt;
     Conf = [k for k in range(N)]&lt;br /&gt;
     Pit[tuple(Conf)] = 1.0&lt;br /&gt;
     NConfs = factorial(N)&lt;br /&gt;
     weight = 1.0 / NConfs&lt;br /&gt;
     DiameterFlag = True&lt;br /&gt;
     for step in range(40000):&lt;br /&gt;
 #    print(Pit)&lt;br /&gt;
         TVD = (NConfs - len(Pit)) * weight / 2.0&lt;br /&gt;
         for k in Pit:&lt;br /&gt;
             TVD += abs(Pit[k] - weight) / 2.0&lt;br /&gt;
 #        print('============', N, step, len(Pit) / NConfs, TVD, '==============')&lt;br /&gt;
         if len(Pit) == NConfs and DiameterFlag == True:&lt;br /&gt;
             DiameterFlag = False&lt;br /&gt;
             print(N, step, 'N, Diameter / N ** 2')&lt;br /&gt;
         XValues.append(step / N / math.log(N))&lt;br /&gt;
         YValues.append(TVD)&lt;br /&gt;
         if TVD &amp;lt; 0.02: break&lt;br /&gt;
         NewPit = {}&lt;br /&gt;
         for OldConf, OldPi in Pit.items():&lt;br /&gt;
             OldConf = list(OldConf)&lt;br /&gt;
             a = OldConf.pop(0)&lt;br /&gt;
             for k in range(N):&lt;br /&gt;
                 Conf = OldConf[0:k] + [a] + OldConf[k:N-1]&lt;br /&gt;
                 if tuple(Conf) in NewPit.keys():&lt;br /&gt;
                     NewPit[tuple(Conf)] += OldPi / N&lt;br /&gt;
                 else:&lt;br /&gt;
                     NewPit[tuple(Conf)] = OldPi / N&lt;br /&gt;
         Pit = copy.deepcopy(NewPit)&lt;br /&gt;
     plt.plot(XValues, YValues, label = '$N= $' + str(N))&lt;br /&gt;
 plt.title('TVD of Top-to-random shuffle')&lt;br /&gt;
 plt.xlabel('$t /( N \log N)$ (rescaled time)')&lt;br /&gt;
 plt.ylabel('TVD')&lt;br /&gt;
 plt.legend(loc='upper center')&lt;br /&gt;
 plt.show()&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 13:27:39 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Top_to_random_TVD.py</comments>		</item>
		<item>
			<title>Top to random simul stop.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Top_to_random_simul_stop.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
 import random&lt;br /&gt;
 NCards = 3&lt;br /&gt;
 data = {}&lt;br /&gt;
 for iter1 in range(1000000):&lt;br /&gt;
     L= [k for k in range(NCards)]&lt;br /&gt;
     green_card = NCards - 1&lt;br /&gt;
     for iter2 in range(2000000):&lt;br /&gt;
         a = L.pop(0)&lt;br /&gt;
         L.insert(random.randint(0, len(L)), a)&lt;br /&gt;
         if a == green_card:&lt;br /&gt;
             L = tuple(L)&lt;br /&gt;
             data[L] = data.get(L, 0) + 1&lt;br /&gt;
             break&lt;br /&gt;
 for k in data:&lt;br /&gt;
     print(k, data[k])&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 13:24:03 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Top_to_random_simul_stop.py</comments>		</item>
		<item>
			<title>Sample transformation power.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Sample_transformation_power.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import random, math&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 &lt;br /&gt;
 N_trials = 1000000&lt;br /&gt;
 data = []&lt;br /&gt;
 gamma = -0.7&lt;br /&gt;
 for iter in range(N_trials):&lt;br /&gt;
     Upsilon = random.uniform(0.0, 1.0)&lt;br /&gt;
 #&lt;br /&gt;
 #   This is the sample transformation SMAC eqs (1.28), (1.29)&lt;br /&gt;
 #&lt;br /&gt;
     x = Upsilon ** (1.0 / (gamma + 1))&lt;br /&gt;
     data.append(x)&lt;br /&gt;
 &lt;br /&gt;
 plt.title('power-law distribution (sample transformation)  $\gamma = $ '+ str(gamma))&lt;br /&gt;
 plt.xlabel('$x$')&lt;br /&gt;
 plt.ylabel('$\pi(x)$')&lt;br /&gt;
 plt.hist(data, bins=100, density=True,label='data')&lt;br /&gt;
 XValues = []&lt;br /&gt;
 YValues = []&lt;br /&gt;
 for i in range(5, 1000):&lt;br /&gt;
     x = i / 1000.0&lt;br /&gt;
     XValues.append(x)&lt;br /&gt;
     YValues.append((gamma + 1.0) * x ** gamma)&lt;br /&gt;
 plt.plot(XValues, YValues, label='theory')&lt;br /&gt;
 plt.legend(loc='upper right')&lt;br /&gt;
 plt.show()&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 13:22:15 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Sample_transformation_power.py</comments>		</item>
		<item>
			<title>Sample transformation exp.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Sample_transformation_exp.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import random, math&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 N_trials = 100000&lt;br /&gt;
 data = []&lt;br /&gt;
 lam = 2.7&lt;br /&gt;
 for iter in range(N_trials):&lt;br /&gt;
     Upsilon = random.uniform(0.0, 1.0)&lt;br /&gt;
 #&lt;br /&gt;
 #   This is the sample transformation SMAC eqs (1.30), (1.31)&lt;br /&gt;
 #&lt;br /&gt;
     x = -math.log(Upsilon) / lam&lt;br /&gt;
     data.append(x)&lt;br /&gt;
 &lt;br /&gt;
 plt.title('exponential random numbers (sample transformation) $\lambda = $ '+ str(lam))&lt;br /&gt;
 plt.xlabel('$x$')&lt;br /&gt;
 plt.ylabel('$\pi(x)$')&lt;br /&gt;
 plt.hist(data, bins=100, density=True,label='data')&lt;br /&gt;
 XValues = []&lt;br /&gt;
 YValues = []&lt;br /&gt;
 for i in range(1000):&lt;br /&gt;
     x = i / 200.0&lt;br /&gt;
     XValues.append(x)&lt;br /&gt;
     YValues.append(lam * math.exp(-lam * x))&lt;br /&gt;
 plt.plot(XValues, YValues, label='theory')&lt;br /&gt;
 plt.legend(loc='upper right')&lt;br /&gt;
 plt.show()&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 13:20:57 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Sample_transformation_exp.py</comments>		</item>
		<item>
			<title>Top to random eigenvalues.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Top_to_random_eigenvalues.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
In this example, I consider the transition matrix for the top-to-random shuffle discussed in Lecture 2. This N! x N! matrix has N distinct eigenvalues, and therefore huge degeneracies.&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import numpy as np&lt;br /&gt;
 import itertools&lt;br /&gt;
 import scipy.linalg as la&lt;br /&gt;
 &lt;br /&gt;
 def factorial(n):&lt;br /&gt;
     return 1 if n == 0 else n * factorial(n - 1)&lt;br /&gt;
 &lt;br /&gt;
 for N in [2, 3, 4, 5, 6, 7]:&lt;br /&gt;
     FacN = factorial(N)&lt;br /&gt;
     ConfCopy = [0] * N&lt;br /&gt;
     print(N, 'N', FacN)&lt;br /&gt;
 #&lt;br /&gt;
 #   Setup of transition matrix&lt;br /&gt;
 #&lt;br /&gt;
     P   = np.zeros((FacN, FacN))&lt;br /&gt;
     Conf = [k for k in range(N)]&lt;br /&gt;
     ConfList = list(itertools.permutations(Conf))&lt;br /&gt;
     for Conf in ConfList:&lt;br /&gt;
         i = ConfList.index(tuple(Conf))&lt;br /&gt;
         ConfCopy[:] = Conf&lt;br /&gt;
         a = ConfCopy.pop(0)&lt;br /&gt;
         for k in range(N):&lt;br /&gt;
             TargetConf = ConfCopy[0:k] + [a] + ConfCopy[k:N - 1]&lt;br /&gt;
             j = ConfList.index(tuple(TargetConf))&lt;br /&gt;
             P[i][j] = 1.0 / float(N)&lt;br /&gt;
     eigvals, eigvecsl, eigvecsr = la.eig(P, left=True)&lt;br /&gt;
     eigvals.sort()&lt;br /&gt;
     stats = [0] * (N+1)&lt;br /&gt;
     for a in eigvals:&lt;br /&gt;
         index =  int(N * a.real + 0.5)&lt;br /&gt;
         stats[index] += 1&lt;br /&gt;
     print(stats)&lt;br /&gt;
&lt;br /&gt;
==Output==&lt;br /&gt;
&lt;br /&gt;
2 N 2 [1, 0, 1]  &lt;br /&gt;
    &lt;br /&gt;
3 N 6 [2, 3, 0, 1]&lt;br /&gt;
&lt;br /&gt;
4 N 24 [9, 8, 6, 0, 1]&lt;br /&gt;
&lt;br /&gt;
5 N 120 [44, 45, 20, 10, 0, 1]&lt;br /&gt;
&lt;br /&gt;
6 N 720 [265, 264, 135, 40, 15, 0, 1]&lt;br /&gt;
&lt;br /&gt;
For N=6, the output means that &lt;br /&gt;
&lt;br /&gt;
the eigenvalue 1 occurs once (it is non-degenerate), &lt;br /&gt;
&lt;br /&gt;
the eigenvalue 1 - 1/N does not occur (degeneracy 0), &lt;br /&gt;
&lt;br /&gt;
the eigenvalue 1 - 2/N is 15-fold degenerate, &lt;br /&gt;
&lt;br /&gt;
the eigenvalue 1 - 3/N is 40-fold degenerate, etc.&lt;br /&gt;
&lt;br /&gt;
The degeneracy of the second-largest eigenvalue, 1 - 2/N, is thus N(N-1)/2 (for N = 6: 6 * 5 / 2 = 15).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
NB: The number of cards in the deck is N, and the diameter of the transition matrix is N-1; this minimal number of steps to connect any two configurations is also computed by the program. The number of configurations is N!, in other words, it is huge. Markov-chain Monte Carlo algorithms work, in spite of the huge sample space, precisely because the diameters of their transition matrices remain small. For N=10, the 3628800 states of the transition matrix can all be connected in 9 steps.&lt;br /&gt;
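&lt;br /&gt;
For small N, the diameter statement can be checked directly with a breadth-first search over all N! decks (a standalone sketch, separate from the eigenvalue program above). It counts the maximal number of top-to-random moves needed to reach an arbitrary deck from the ordered one, which by the symmetry of the shuffle equals the diameter:&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 &lt;br /&gt;
 def neighbors(conf):&lt;br /&gt;
     # all decks reachable from conf by one top-to-random move&lt;br /&gt;
     rest = list(conf[1:])&lt;br /&gt;
     return [tuple(rest[:k] + [conf[0]] + rest[k:]) for k in range(len(conf))]&lt;br /&gt;
 &lt;br /&gt;
 N = 5&lt;br /&gt;
 start = tuple(range(N))&lt;br /&gt;
 reached, frontier, diameter = {start}, {start}, 0&lt;br /&gt;
 while len(reached) &amp;lt; math.factorial(N):&lt;br /&gt;
     frontier = {c for conf in frontier for c in neighbors(conf)} - reached&lt;br /&gt;
     reached |= frontier&lt;br /&gt;
     diameter += 1&lt;br /&gt;
 print(N, diameter)                           # expected: 5 4, that is, diameter N - 1&lt;br /&gt;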
&lt;br /&gt;
==Further information==&lt;br /&gt;
&lt;br /&gt;
The program [[Top_to_random_simul.py | Top_to_random_simul.py]] contains a Monte Carlo simulation of the top-to-random shuffle. It will output a perfectly shuffled deck in the limit of infinite shuffling times. &lt;br /&gt;
&lt;br /&gt;
[[Top_to_random_simul_stop.py |Top_to_random_simul_stop.py]] performs a simulation with a stopping criterion: It will output a perfectly shuffled deck.&lt;br /&gt;
&lt;br /&gt;
[[Top_to_random_TVD.py | Top_to_random_TVD.py]] computes the total variation distance, starting from a single-permutation starting configuration at time t=0.&lt;br /&gt;
&lt;br /&gt;
Finally, [[Top_to_random_TVD_mix_rel.py | Top_to_random_TVD_mix_rel.py]] again computes the total variation distance, starting from a single-permutation starting configuration at time t=0, but it performs an analysis in terms of mixing and relaxation times for the small system sizes that are available to us.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
Aldous D. and Diaconis P., Shuffling Cards and Stopping Times, The American Mathematical Monthly 93, 333 (1986)&lt;br /&gt;
&lt;br /&gt;
Diaconis, P., Fill, J., and Pitman, J., Analysis of top to random shuffles. Combinatorics, Probability, and Computing (1992) 1, 135-155.&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 13:17:47 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Top_to_random_eigenvalues.py</comments>		</item>
		<item>
			<title>Bernoulli poisson.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Bernoulli_poisson.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import random, math&lt;br /&gt;
 p = 0.7&lt;br /&gt;
 lam = - math.log(1.0 - p)&lt;br /&gt;
 N_trial = 1000000&lt;br /&gt;
 count_a = 0&lt;br /&gt;
 for iter in range(N_trial):&lt;br /&gt;
     t = -math.log(random.uniform(0., 1.0)) / lam&lt;br /&gt;
     if t &amp;lt; 1.0: count_a += 1&lt;br /&gt;
 print(count_a / N_trial)&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 12:51:35 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Bernoulli_poisson.py</comments>		</item>
		<item>
			<title>Direct surface 2d.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Direct_surface_2d.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
 import random, math&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 N_trials = 1000000&lt;br /&gt;
 data = []&lt;br /&gt;
 twopi = 2.0 * math.pi&lt;br /&gt;
 for iter in range(N_trials):&lt;br /&gt;
     x = random.gauss(0.0, 1.0)&lt;br /&gt;
     y = random.gauss(0.0, 1.0)&lt;br /&gt;
     r = math.sqrt(x **2 + y ** 2);&lt;br /&gt;
     x = x / r; y = y / r # uniform sample on the surface of unit sphere&lt;br /&gt;
     phi = (math.atan2(y, x) + twopi) % twopi&lt;br /&gt;
     data.append(phi)&lt;br /&gt;
 plt.title('direct_surface_2d.py (histogram of angles)')&lt;br /&gt;
 plt.xlabel('angle')&lt;br /&gt;
 plt.ylabel('histogram')&lt;br /&gt;
 plt.hist(data, bins=100, density=True)&lt;br /&gt;
 plt.show()&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 12:48:29 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Direct_surface_2d.py</comments>		</item>
		<item>
			<title>Naive surface 2d.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Naive_surface_2d.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import random, math&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 N_trials = 1000000&lt;br /&gt;
 data = []&lt;br /&gt;
 twopi = 2.0 * math.pi&lt;br /&gt;
 for iter in range(N_trials):&lt;br /&gt;
     x = random.uniform(-1.0, 1.0)&lt;br /&gt;
     y = random.uniform(-1.0, 1.0)&lt;br /&gt;
     r = math.sqrt(x ** 2 + y ** 2)&lt;br /&gt;
     if r &amp;lt; 1.0:&lt;br /&gt;
         x = x / r; y = y / r # uniform sample on the surface of unit sphere&lt;br /&gt;
         phi = (math.atan2(y, x) + twopi) % twopi&lt;br /&gt;
         data.append(phi)&lt;br /&gt;
 plt.title('naive_surface_2d.py (histogram of angles)')&lt;br /&gt;
 plt.xlabel('angle')&lt;br /&gt;
 plt.ylabel('histogram')&lt;br /&gt;
 plt.hist(data, bins=100, density=True)&lt;br /&gt;
 plt.show()&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 12:47:35 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Naive_surface_2d.py</comments>		</item>
		<item>
			<title>Buggy surface 2d.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Buggy_surface_2d.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 import random, math&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 N_trials = 1000000&lt;br /&gt;
 data = []&lt;br /&gt;
 twopi = 2.0 * math.pi&lt;br /&gt;
 for iter in range(N_trials):&lt;br /&gt;
     x = random.uniform(-1.0, 1.0)&lt;br /&gt;
     y = random.uniform(-1.0, 1.0)&lt;br /&gt;
     r = math.sqrt(x **2 + y ** 2);&lt;br /&gt;
     x = x / r; y = y / r # non-uniform sample on the surface of unit sphere&lt;br /&gt;
     phi = (math.atan2(y, x) + twopi) % twopi&lt;br /&gt;
     data.append(phi)&lt;br /&gt;
 plt.title('buggy_surface_2d.py (histogram of angles)')&lt;br /&gt;
 plt.xlabel('angle')&lt;br /&gt;
 plt.ylabel('histogram')&lt;br /&gt;
 plt.hist(data, bins=100, density=True)&lt;br /&gt;
 plt.show()&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 12:46:04 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Buggy_surface_2d.py</comments>		</item>
		<item>
			<title>Bernoulli two pebbles patch.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Bernoulli_two_pebbles_patch.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import random&lt;br /&gt;
 p = 0.7&lt;br /&gt;
 phat = 0.4&lt;br /&gt;
 count_a = 0&lt;br /&gt;
 N_trial = 1000000&lt;br /&gt;
 for iter in range(N_trial):&lt;br /&gt;
     Upsilon_1 = random.uniform(0.0, 1.0)&lt;br /&gt;
     if Upsilon_1 &amp;lt; phat: count_a += 1&lt;br /&gt;
     else:&lt;br /&gt;
         Upsilon_2 = random.uniform(0.0, 1.0)&lt;br /&gt;
         if Upsilon_2 &amp;gt; (1 - p) / (1 - phat): count_a += 1&lt;br /&gt;
 print(count_a / N_trial)&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 05:33:42 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Bernoulli_two_pebbles_patch.py</comments>		</item>
		<item>
			<title>Bernoulli two pebbles.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Bernoulli_two_pebbles.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import random&lt;br /&gt;
 p = 0.7&lt;br /&gt;
 phat = 0.4&lt;br /&gt;
 count_a = 0&lt;br /&gt;
 N_trial = 1000000&lt;br /&gt;
 for iter in range(N_trial):&lt;br /&gt;
     Upsilon_1 = random.uniform(0.0, 1.0)&lt;br /&gt;
     if Upsilon_1 &amp;lt; phat: count_a += 1&lt;br /&gt;
     else:&lt;br /&gt;
         Upsilon_2 = random.uniform(phat, 1.0)&lt;br /&gt;
         if Upsilon_2 &amp;lt; p: count_a += 1&lt;br /&gt;
 print(count_a / N_trial)&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 05:31:21 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Bernoulli_two_pebbles.py</comments>		</item>
		<item>
			<title>Bernoulli.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Bernoulli.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import random&lt;br /&gt;
 p = 0.7&lt;br /&gt;
 count_a = 0&lt;br /&gt;
 N_trial = 1000000&lt;br /&gt;
 for iter in range(N_trial):&lt;br /&gt;
     Upsilon = random.uniform(0.0, 1.0)&lt;br /&gt;
     if Upsilon &amp;lt; p: count_a += 1&lt;br /&gt;
 print(count_a / N_trial)&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 05:29:43 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Bernoulli.py</comments>		</item>
		<item>
			<title>Diffusion.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Diffusion.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
We consider a particle diffusing on a path graph with five sites, always starting at position i = 0, at time t=0. Arrows go &amp;quot;down&amp;quot;, &amp;quot;straight&amp;quot; and &amp;quot;up&amp;quot; with equal probability. In application of the Metropolis algorithm, an arrow that goes outside the range [0, N - 1] is rejected, that is, replaced by a straight arrow. The stationary distribution of this reversible Markov chain is uniform on the N sites, because the probability to move from site i to site j equals the probability to move from site j to site i.&lt;br /&gt;
&lt;br /&gt;
[[Image:One d single.png|left|600px|border|Diffusion on a path graph with 5 sites.]]&lt;br /&gt;
&amp;lt;br clear=&amp;quot;all&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
 import random&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 &lt;br /&gt;
 N = 5&lt;br /&gt;
 data = []&lt;br /&gt;
 tmax = 10&lt;br /&gt;
 for stat in range(100000):&lt;br /&gt;
    pos = 0&lt;br /&gt;
    for t in range(tmax):&lt;br /&gt;
       pos = min(max(pos + random.choice([-1, 0, 1]), 0), N - 1)&lt;br /&gt;
    data.append(pos)&lt;br /&gt;
 plt.title('diffusion starting at $x=0$,  $t = $' + str(tmax))&lt;br /&gt;
 plt.hist(data, bins=N, range=(-0.5, N-0.5), density=True)&lt;br /&gt;
 plt.savefig('diffusion.png')&lt;br /&gt;
 plt.show()&lt;br /&gt;
&lt;br /&gt;
NB: The line containing the &amp;quot;pos = min(max ... &amp;quot; statement may appear cryptic, but it simply performs the random walk from the position &amp;quot;pos&amp;quot; in &amp;quot;down&amp;quot;, &amp;quot;straight&amp;quot; and &amp;quot;up&amp;quot; directions, with the necessary cropping at the boundaries that keeps the random walk safely within the range [0, N-1].&lt;br /&gt;
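&lt;br /&gt;
For readers who prefer the rejection spelled out, here is an equivalent, purely illustrative rewrite of that single line (it uses the same variables pos and N and the random module as the program above):&lt;br /&gt;
&lt;br /&gt;
 step = random.choice([-1, 0, 1])    # propose 'down', 'straight' or 'up'&lt;br /&gt;
 if 0 &amp;lt;= pos + step &amp;lt;= N - 1:   # accept only moves that stay on the path graph&lt;br /&gt;
     pos += step                     # a rejected proposal leaves pos unchanged&lt;br /&gt;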
&lt;br /&gt;
==Output==&lt;br /&gt;
&lt;br /&gt;
Here is the output of the above Python program. Although the stationary distribution is flat, as discussed above, the distribution pi^t at finite times t is&lt;br /&gt;
biased towards the initial configuration i(t=0) = 0. The figure below shows the histogram of positions after 10 steps:&lt;br /&gt;
[[Image:Diffusion.png|left|600px|border|Histogram of output after 10 steps, starting from i=0.]]&lt;br /&gt;
&amp;lt;br clear=&amp;quot;all&amp;quot; /&amp;gt;&lt;br /&gt;
The errors with respect to the flat distribution are finite at each finite time t.&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
* Rather than starting at the 'bad' initial configuration i(t=0) = 0, one might be tempted to start in the middle, but this contradicts the fact that, in real applications, one cannot usually choose a &amp;quot;good&amp;quot; initial configuration. &lt;br /&gt;
* One may also want to replace the 'bad' initial configuration by a random initial configuration i(t=0) = choice(0, ... N - 1). But this would mean that we do not have to do Markov-chain sampling at all, because a direct-sampling strategy is available. &lt;br /&gt;
&lt;br /&gt;
All this was discussed in detail in the lectures, and gave rise to lively discussions.&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 05:26:51 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Diffusion.py</comments>		</item>
		<item>
			<title>BegRohu Lectures 2024</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/BegRohu_Lectures_2024</link>
			<description>&lt;p&gt;Summary: /* Context */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
A &amp;quot;few&amp;quot; years ago, as if it was yesterday, I taught at the 1996 Beg Rohu Summer School on an [https://arxiv.org/pdf/cond-mat/9612186 Introduction To Monte Carlo Algorithms]. The lecture notes of this first course were ''found readable and the drawings enjoyable'', so I made them into a 2006 book for Oxford University Press entitled &amp;quot;Statistical Mechanics: Algorithms and Computations&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
The 2024 lectures discuss the foundations of the enormous corpus of works that has constituted the second revolution in Markov chains and Monte Carlo algorithms, namely the understanding of time scales in stochastic dynamics and the concepts of coupling, thinning and, last but not least, the systematic construction of non-reversible (that is, non-equilibrium) Markov chains that nevertheless converge to the imposed equilibrium steady state. I hope that what I present here is again ''found readable and enjoyable'', ...&lt;br /&gt;
&lt;br /&gt;
==List of Python programs==&lt;br /&gt;
&lt;br /&gt;
===Lecture 1===&lt;br /&gt;
&lt;br /&gt;
[[Bernoulli.py | Bernoulli.py]]&lt;br /&gt;
&lt;br /&gt;
[[Bernoulli_two_pebbles.py | Bernoulli_two_pebbles.py]]&lt;br /&gt;
&lt;br /&gt;
[[Bernoulli_two_pebbles_patch.py | Bernoulli_two_pebbles_patch.py]]&lt;br /&gt;
&lt;br /&gt;
[[Bernoulli_poisson.py | Bernoulli_poisson.py]]&lt;br /&gt;
&lt;br /&gt;
[[Sample_transformation_exp.py | Sample_transformation_exp.py]]&lt;br /&gt;
&lt;br /&gt;
[[Sample_transformation_power.py | Sample_transformation_power.py]]&lt;br /&gt;
&lt;br /&gt;
[[Buggy_surface_2d.py | Buggy_surface_2d.py]]&lt;br /&gt;
&lt;br /&gt;
[[Naive_surface_2d.py | Naive_surface_2d.py]]&lt;br /&gt;
&lt;br /&gt;
[[Direct_surface_2d.py | Direct_surface_2d.py]]&lt;br /&gt;
&lt;br /&gt;
[[Walker.py | Walker.py]]&lt;br /&gt;
&lt;br /&gt;
===Lecture 2===&lt;br /&gt;
&lt;br /&gt;
[[Image:One d cftp.png|left|600px|border|Coupling-from-the-past approach to sampling.]]&lt;br /&gt;
&amp;lt;br clear=&amp;quot;all&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[ Top_to_random_eigenvalues.py | Top_to_random_eigenvalues.py]] x &amp;amp;beta;&lt;br /&gt;
&lt;br /&gt;
[[ Top to random simul.py |  Top to random simul.py]]&lt;br /&gt;
&lt;br /&gt;
[[ Top_to_random_simul_stop.py | Top_to_random_simul_stop.py]]&lt;br /&gt;
&lt;br /&gt;
[[Top_to_random_TVD.py | Top_to_random_TVD.py]]&lt;br /&gt;
&lt;br /&gt;
[[ Top to random TVD mix rel.py | Top to random TVD mix rel.py]]&lt;br /&gt;
&lt;br /&gt;
[[Diffusion.py | Diffusion.py]] x&lt;br /&gt;
&lt;br /&gt;
[[Diffusion_forward.py | Diffusion_forward.py]]&lt;br /&gt;
&lt;br /&gt;
[[Diffusion_CFTP_coupl_pos.py | Diffusion_CFTP_coupl_pos.py]]&lt;br /&gt;
&lt;br /&gt;
[[Diffusion CFTP.py | Diffusion CFTP.py]] x &amp;amp;beta;&lt;br /&gt;
&lt;br /&gt;
===Lecture 3===&lt;br /&gt;
&lt;br /&gt;
[[SSEPCompact.py | SSEPCompact.py]] x&lt;br /&gt;
&lt;br /&gt;
[[SSEPSteady.py | SSEPSteady.py]]&lt;br /&gt;
&lt;br /&gt;
[[TASEPCompact.py | TASEPCompact.py]] x&lt;br /&gt;
&lt;br /&gt;
[[TASEPSteady.py | TASEPSteady.py]]&lt;br /&gt;
&lt;br /&gt;
[[LiftedTASEPCompact.py | LiftedTASEPCompact.py]] x&lt;br /&gt;
&lt;br /&gt;
[[LiftedTASEPSteady.py | LiftedTASEPSteady.py]]&lt;br /&gt;
&lt;br /&gt;
===Lecture 4===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Metropolis_X2X4.py | Metropolis_X2X4.py ]]&lt;br /&gt;
&lt;br /&gt;
[[Factor_Metropolis_X2X4.py | Factor_Metropolis_X2X4.py ]]&lt;br /&gt;
&lt;br /&gt;
[[Factor_Metropolis_X2X4_patch.py | Factor_Metropolis_X2X4_patch.py ]]&lt;br /&gt;
&lt;br /&gt;
[[Lifted_Metropolis_X2X4.py | Lifted_Metropolis_X2X4.py ]]&lt;br /&gt;
&lt;br /&gt;
[[ZigZag_X2X4.py | ZigZag_X2X4.py ]]&lt;br /&gt;
&lt;br /&gt;
[[Factor_ZigZag_X2X4.py | Factor_ZigZag_X2X4.py ]]&lt;br /&gt;
&lt;br /&gt;
[[Bounded_Lifted_Metropolis_X2X4.py | Bounded_Lifted_Metropolis_X2X4.py  ]]&lt;br /&gt;
&lt;br /&gt;
[[Bounded_ZigZag_X2X4.py | Bounded_ZigZag_X2X4.py ]]&lt;br /&gt;
&lt;br /&gt;
[[Bounded_Factor_ZigZag_X2X4.py | Bounded_Factor_ZigZag_X2X4.py ]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Lecture 5===&lt;br /&gt;
&lt;br /&gt;
[[Stopping_circle.py | Stopping_circle.py ]]&lt;br /&gt;
&lt;br /&gt;
[[SSEP_coupling.py | SSEP_coupling.py ]]&lt;br /&gt;
&lt;br /&gt;
[[SSEP_coupling_FTP.py | SSEP_coupling_FTP.py ]]&lt;br /&gt;
&lt;br /&gt;
[[Ising_coupling.py | Ising_coupling.py ]]&lt;br /&gt;
&lt;br /&gt;
[[Ising_coupling_FTP.py | Ising_coupling_FTP.py ]]&lt;br /&gt;
&lt;br /&gt;
[[Hard_spheres_coupling.py |Hard_spheres_coupling.py ]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;quot;x&amp;quot; means that the program &amp;quot;already&amp;quot; has some explanatory text, even though this text may not yet be complete.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;&amp;amp;beta;&amp;quot; means that the program has been &amp;amp;beta;-tested by students at the 2024 Beg Rohu summer school.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
* Levin, D. A., Peres, Y. &amp;amp; Wilmer, E. L. Markov Chains and Mixing Times (American Mathematical Society, 2008)&lt;br /&gt;
* Krauth, W., Statistical Mechanics: Algorithms and Computations (Oxford University Press, 2006)&lt;br /&gt;
* Wasserman, L., All of Statistics (Springer Verlag, 2004)&lt;br /&gt;
&lt;br /&gt;
See the individual pages of my lectures for a large number of more specific references.&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 05:13:44 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:BegRohu_Lectures_2024</comments>		</item>
		<item>
			<title>Top to random TVD mix rel.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Top_to_random_TVD_mix_rel.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import math&lt;br /&gt;
 import copy&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 &lt;br /&gt;
 def factorial(n):&lt;br /&gt;
     return 1 if n == 0 else factorial(n - 1) * n&lt;br /&gt;
 &lt;br /&gt;
 for N in [8]:&lt;br /&gt;
     print(N, 'N')&lt;br /&gt;
     XValues = []&lt;br /&gt;
     YValues = []&lt;br /&gt;
 #&lt;br /&gt;
 #   computing the TVD&lt;br /&gt;
 #&lt;br /&gt;
     Pit = {}&lt;br /&gt;
     Conf = [k for k in range(N)]&lt;br /&gt;
     Pit[tuple(Conf)] = 1.0&lt;br /&gt;
     NConfs = factorial(N)&lt;br /&gt;
     weight = 1.0 / NConfs&lt;br /&gt;
     DiameterFlag = True&lt;br /&gt;
     tmix = None   # mixing time: first step at which the TVD drops below 1 / 5.4 ~ 1 / (2 e)&lt;br /&gt;
     for step in range(40000):&lt;br /&gt;
         TVD = (NConfs - len(Pit)) * weight / 2.0&lt;br /&gt;
         for k in Pit:&lt;br /&gt;
             TVD += abs(Pit[k] - weight) / 2.0&lt;br /&gt;
 #        print('============', N, step, len(Pit) / NConfs, TVD, '==============')&lt;br /&gt;
         if len(Pit) == NConfs and DiameterFlag == True:&lt;br /&gt;
             DiameterFlag = False&lt;br /&gt;
             print(N, step, 'N, Diameter')&lt;br /&gt;
         XValues.append(step)&lt;br /&gt;
         YValues.append(TVD)&lt;br /&gt;
         if TVD &amp;lt; 0.0000001: break&lt;br /&gt;
         if TVD &amp;lt; 1.0 / 5.4 and tmix is None: tmix = step&lt;br /&gt;
         NewPit = {}&lt;br /&gt;
         for OldConf, OldPi in Pit.items():&lt;br /&gt;
             OldConf = list(OldConf)&lt;br /&gt;
             a = OldConf.pop(0)&lt;br /&gt;
             for k in range(N):&lt;br /&gt;
                 Conf = OldConf[0:k] + [a] + OldConf[k:N-1]&lt;br /&gt;
                 if tuple(Conf) in NewPit.keys():&lt;br /&gt;
                     NewPit[tuple(Conf)] += OldPi / N&lt;br /&gt;
                 else:&lt;br /&gt;
                     NewPit[tuple(Conf)] = OldPi / N&lt;br /&gt;
         Pit = copy.deepcopy(NewPit)&lt;br /&gt;
     plt.semilogy(XValues, YValues, label = '$N= $' + str(N))&lt;br /&gt;
     tau_rel = N  / 2.3&lt;br /&gt;
     for i in range(len(XValues)):&lt;br /&gt;
         X = XValues[i]&lt;br /&gt;
         YValues[i] =  (1 - 2.0 / N) ** X&lt;br /&gt;
     plt.semilogy(XValues, YValues, label = 'Rel $N= $' + str(N))&lt;br /&gt;
     for i in range(len(XValues)):&lt;br /&gt;
         X = XValues[i]&lt;br /&gt;
         YValues[i] =  (1.0 / 2.0 ) ** (X / tmix)&lt;br /&gt;
     plt.semilogy(XValues, YValues, label = 'Mix N= ' + str(N))&lt;br /&gt;
 &lt;br /&gt;
 plt.title('TVD of Top-to-random shuffle')&lt;br /&gt;
 plt.xlabel('$t$ (non-rescaled time)')&lt;br /&gt;
 plt.ylabel('TVD')&lt;br /&gt;
 plt.legend(loc='upper center')&lt;br /&gt;
 plt.show()&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 00:08:16 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Top_to_random_TVD_mix_rel.py</comments>		</item>
		<item>
			<title>Top to random simul.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Top_to_random_simul.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import random&lt;br /&gt;
 NCards = 3&lt;br /&gt;
 data = {}&lt;br /&gt;
 for iter1 in range(1000000):&lt;br /&gt;
     L= [k for k in range(NCards)]&lt;br /&gt;
     for iter2 in range(10):&lt;br /&gt;
         a = L.pop(0)&lt;br /&gt;
         L.insert(random.randint(0, len(L)), a)&lt;br /&gt;
     L = tuple(L)&lt;br /&gt;
     data[L] = data.get(L, 0) + 1&lt;br /&gt;
 for k in data:&lt;br /&gt;
     print(k, data[k])&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 00:05:53 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Top_to_random_simul.py</comments>		</item>
		<item>
			<title>Walker.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Walker.py</link>
			<description>&lt;p&gt;Summary: /* Python program */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
 import random&lt;br /&gt;
 def walker_setup (pi):&lt;br /&gt;
     &amp;quot;&amp;quot;&amp;quot;compute the lookup table for Walker's algorithm&amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
     N_walker = len(pi)&lt;br /&gt;
     walker_mean = sum(a[0] for a in pi) / float(N_walker)&lt;br /&gt;
     long_s = []&lt;br /&gt;
     short_s = []&lt;br /&gt;
     for p in pi:&lt;br /&gt;
         if p[0] &amp;gt; walker_mean:&lt;br /&gt;
             long_s.append(p[:])&lt;br /&gt;
         else:&lt;br /&gt;
             short_s.append(p[:])&lt;br /&gt;
     walker_table = []&lt;br /&gt;
     for k in range(N_walker - 1):&lt;br /&gt;
         e_plus = long_s.pop()&lt;br /&gt;
         e_minus = short_s.pop()&lt;br /&gt;
         walker_table.append((e_minus[0], e_minus[1], e_plus[1]))&lt;br /&gt;
         e_plus[0] = e_plus[0] - (walker_mean - e_minus[0])&lt;br /&gt;
         if e_plus[0] &amp;lt; walker_mean:&lt;br /&gt;
             short_s.append(e_plus)&lt;br /&gt;
         else:&lt;br /&gt;
             long_s.append(e_plus)&lt;br /&gt;
     if long_s != []:&lt;br /&gt;
         walker_table.append((long_s[0][0], long_s[0][1], long_s[0][1]))&lt;br /&gt;
     else:&lt;br /&gt;
         walker_table.append((short_s[0][0], short_s[0][1], short_s[0][1]))&lt;br /&gt;
     return N_walker, walker_mean, walker_table&lt;br /&gt;
 &lt;br /&gt;
 N = 12&lt;br /&gt;
 pi = [random.uniform(0.1, 10.0) for i in range(N)]&lt;br /&gt;
 pisum = sum(a for a in pi)&lt;br /&gt;
 pi = [[pi[k] / pisum, k] for k in range(N)]   # see Comment below&lt;br /&gt;
 &lt;br /&gt;
 N_walker, walker_mean, walker_table = walker_setup (pi)&lt;br /&gt;
 &lt;br /&gt;
 data = [0] * N&lt;br /&gt;
 NSample = 100000 * N&lt;br /&gt;
 for iter in range(NSample):&lt;br /&gt;
     i = random.randint (0, N - 1)&lt;br /&gt;
     Upsilon = random.uniform (0.0, walker_mean)&lt;br /&gt;
     if Upsilon &amp;lt; walker_table[i][0]:&lt;br /&gt;
         sample = walker_table[i][1]&lt;br /&gt;
     else:&lt;br /&gt;
         sample = walker_table[i][2]&lt;br /&gt;
     data[sample] += 1.0 / NSample&lt;br /&gt;
 for i in range(N):&lt;br /&gt;
     print(pi[i], data[i])&lt;br /&gt;
&lt;br /&gt;
Comment: In the line above that contains the comment, the normalized probabilities are entered into the list-of-lists pi. Each element of the list pi is thus a list of two elements, the probability pi(i) and the index i. At the beginning, this is a bit pointless. Later on in the process, some cutting-up and rearranging takes place, and it becomes convenient to know the 'alias' of each stick, that is, the original element i that each stick came from.&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 00:04:24 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Walker.py</comments>		</item>
		<item>
			<title>Diffusion CFTP.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Diffusion_CFTP.py</link>
			<description>&lt;p&gt;Summary: /* Python program */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Context==&lt;br /&gt;
This page is part of my [[BegRohu_Lectures_2024|2024 Beg Rohu Lectures]] on &amp;quot;The second Markov chain revolution&amp;quot; at the [https://www.ipht.fr/Meetings/BegRohu2024/index.html Summer School] &amp;quot;Concepts and Methods of Statistical Physics&amp;quot; (3 - 15 June 2024).&lt;br /&gt;
&lt;br /&gt;
The present program illustrates the coupling-from-the-past (CFTP) approach to Markov-chain sampling introduced by Propp and Wilson in 1996. This paper is definitely part of the ''second Revolution''. It shows how Markov-chain sampling can obtain samples from the stationary distribution, without any corrections whatsoever. As in the other programs in the series started by [[Diffusion.py |Diffusion.py]], we consider a particle diffusing on a path graph with five sites, with &amp;quot;down&amp;quot;, &amp;quot;up&amp;quot;, and &amp;quot;straight&amp;quot; moves that are cropped at the boundaries (there are no periodic boundary conditions). We consider the formulation of the Markov chain in terms of random maps, run from time t=-infinity up to time t=0. &lt;br /&gt;
&lt;br /&gt;
[[Image:One d cftp.png|left|600px|border|Coupling-from-the-past approach to sampling.]]&lt;br /&gt;
&amp;lt;br clear=&amp;quot;all&amp;quot; /&amp;gt;&lt;br /&gt;
At time t=-infinity, as shown, the pebble starts on configuration 1 so that, by time t=0, it has run for an infinite time. Whatever the mixing time and the relaxation time, by t=0 it must sample the stationary distribution (the uniform distribution on the five sites). The program below constructs just as many sets of arrows as are needed to determine where the pebble finishes its infinitely long journey.&lt;br /&gt;
&lt;br /&gt;
==Python program==&lt;br /&gt;
&lt;br /&gt;
 import random&lt;br /&gt;
 import matplotlib.pyplot as plt&lt;br /&gt;
 &lt;br /&gt;
 N = 5&lt;br /&gt;
 pos = []&lt;br /&gt;
 for stat in range(100000):&lt;br /&gt;
    all_arrows = {}&lt;br /&gt;
    time_tot = 0&lt;br /&gt;
    while True:&lt;br /&gt;
       time_tot -= 1&lt;br /&gt;
       arrows = [random.choice([-1, 0, 1]) for i in range(N)]&lt;br /&gt;
       if arrows[0] == -1: arrows[0] = 0&lt;br /&gt;
       if arrows[N - 1] == 1: arrows[N - 1] = 0&lt;br /&gt;
       all_arrows[time_tot] = arrows&lt;br /&gt;
       positions = set(range(0, N))&lt;br /&gt;
       for t in range(time_tot, 0):&lt;br /&gt;
          positions = set([b + all_arrows[t][b] for b in positions])&lt;br /&gt;
       if len(positions) == 1: break&lt;br /&gt;
    a = positions.pop()&lt;br /&gt;
    pos.append(a)&lt;br /&gt;
 plt.title('Backward coupling: 1-d with walls: position at t=0')&lt;br /&gt;
 plt.hist(pos, bins=N, range=(-0.5, N - 0.5), density=True)&lt;br /&gt;
 plt.savefig('backward_position_t0.png')&lt;br /&gt;
 plt.show()&lt;br /&gt;
&lt;br /&gt;
Note that &amp;quot;arrows&amp;quot; is a list, and all_arrows a dictionary containing the arrows for each position (0,1,...,N - 1). This dictionary contains exactly the random map of arrows in the above picture, from time time_tot to time -1.&lt;br /&gt;
&lt;br /&gt;
==Output==&lt;br /&gt;
&lt;br /&gt;
Here is the output of the above Python program. The histogram is absolutely flat, without any corrections. This is as expected, given that the simulation has run, for each realization of the random map, for an infinite number of iterations. &lt;br /&gt;
&lt;br /&gt;
[[Image:Backward position t0.png|left|600px|border|Coupling-from-the-past approach to sampling.]]&lt;br /&gt;
&amp;lt;br clear=&amp;quot;all&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
* Rather than running the simulation until t=0, one might be tempted to stop it as soon as the configurations have coupled at some time t &amp;lt; 0. This is tested in the following program, but it is wrong for an obvious reason: the goal was to infer the position at t=0 of a simulation that has run since time t=-infinity.&lt;br /&gt;
* The above program is easily adapted to a non-uniform stationary distribution pi(0), ..., pi(N - 1) (a minimal sketch follows after this list), and its output is then equivalent to what we would obtain using the direct-sampling algorithms that I discussed in BegRohu Lecture 1, first and foremost [[Walker.py| Walker's algorithm]]. However, Walker's algorithm always requires the stationary probabilities on the sample space to be evaluated and rearranged in some way, whereas the CFTP approach to Markov chains can often be applied to huge sample spaces, way larger than what can be listed. This is in itself an astonishing story that I hope to have time to relate in later lectures.&lt;br /&gt;
* A truly astonishing direct-sampling algorithm, [[Stopping circle.py | Stopping_circle.py]], based on the concept of strong stationary stopping times, is discussed in Lecture 5. However, this algorithm comes with a no-go theorem by Lovász and Winkler, stating that it is only correct for the cycle graph (the graph in the above example, but with periodic boundary conditions) and for the complete graph.&lt;br /&gt;
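&lt;br /&gt;
Here is a minimal sketch of the adaptation mentioned in the first point above, for an arbitrarily chosen (hypothetical) set of unnormalized weights pi(0), ..., pi(N - 1). The random maps are now built from Metropolis proposals and acceptances, while the coalescence logic is exactly as in the program above:&lt;br /&gt;
 import random&lt;br /&gt;
 &lt;br /&gt;
 N = 5&lt;br /&gt;
 pi = [1.0, 2.0, 4.0, 2.0, 1.0]   # hypothetical unnormalized weights pi(0), ..., pi(N - 1)&lt;br /&gt;
 pos = []&lt;br /&gt;
 for stat in range(100000):&lt;br /&gt;
     all_arrows = {}&lt;br /&gt;
     time_tot = 0&lt;br /&gt;
     while True:&lt;br /&gt;
         time_tot -= 1&lt;br /&gt;
         arrows = []&lt;br /&gt;
         for b in range(N):&lt;br /&gt;
             bp = b + random.choice([-1, 0, 1])   # Metropolis proposal from site b&lt;br /&gt;
             if 0 &amp;lt;= bp &amp;lt;= N - 1 and random.uniform(0.0, 1.0) &amp;lt; pi[bp] / pi[b]:&lt;br /&gt;
                 arrows.append(bp - b)   # accepted move&lt;br /&gt;
             else:&lt;br /&gt;
                 arrows.append(0)   # rejected: straight arrow&lt;br /&gt;
         all_arrows[time_tot] = arrows&lt;br /&gt;
         positions = set(range(N))&lt;br /&gt;
         for t in range(time_tot, 0):&lt;br /&gt;
             positions = set(b + all_arrows[t][b] for b in positions)&lt;br /&gt;
         if len(positions) == 1: break&lt;br /&gt;
     pos.append(positions.pop())&lt;br /&gt;
 for k in range(N):&lt;br /&gt;
     print(k, pos.count(k) / len(pos), pi[k] / sum(pi))   # empirical frequency vs normalized pi&lt;br /&gt;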
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The reference is:&lt;br /&gt;
&lt;br /&gt;
* Propp, J. G. &amp;amp; Wilson, D. B. Exact sampling with coupled Markov chains and applications to statistical mechanics, Random Structures and Algorithms 9, 223 (1996)&lt;/div&gt;</description>
			<pubDate>Thu, 06 Jun 2024 00:02:29 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Diffusion_CFTP.py</comments>		</item>
		<item>
			<title>Heatbath ising.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Heatbath_ising.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page presents the Python3 program heatbath_ising.py, a Markov-chain algorithm for the Ising model on an L x L square lattice in two dimensions with periodic boundary conditions. This program uses the heatbath algorithm.&lt;br /&gt;
&lt;br /&gt;
__FORCETOC__&lt;br /&gt;
=Description=&lt;br /&gt;
The program is described in my 2024 Oxford Lecture No 8, and also in my book. This version of the program estimates the energy per particle and the specific heat. &lt;br /&gt;
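In the heat-bath algorithm, a randomly chosen spin k is, independently of its current value, set to +1 with probability 1 / (1 + exp(-2 beta h)), where h is the sum of its four neighboring spins, and to -1 otherwise; this is what the update in the program below implements.&lt;br /&gt;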
=Program=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 import random, math&lt;br /&gt;
 &lt;br /&gt;
 L = 6&lt;br /&gt;
 N = L * L&lt;br /&gt;
 nbr = {i : ((i // L) * L + (i + 1) % L, (i + L) % N,&lt;br /&gt;
             (i // L) * L + (i - 1) % L, (i - L) % N) \&lt;br /&gt;
                                     for i in range(N)}&lt;br /&gt;
 nsteps = 10000000&lt;br /&gt;
 beta = 1.0&lt;br /&gt;
 S = [random.choice([-1, 1]) for site in range(N)]&lt;br /&gt;
 E = -0.5 * sum(S[k] * sum(S[nn] for nn in nbr[k]) \&lt;br /&gt;
                                 for k in range(N))&lt;br /&gt;
 E_tot, E2_tot = 0.0, 0.0&lt;br /&gt;
 for step in range(nsteps):&lt;br /&gt;
     k = random.randint(0, N - 1)&lt;br /&gt;
     Upsilon = random.uniform(0.0, 1.0)&lt;br /&gt;
     h = sum(S[nn] for nn in nbr[k])&lt;br /&gt;
     Sk_old = S[k]&lt;br /&gt;
     S[k] = -1&lt;br /&gt;
     if Upsilon &amp;lt; 1.0 / (1.0 + math.exp(-2.0 * beta * h)):&lt;br /&gt;
         S[k] = 1&lt;br /&gt;
     if S[k] != Sk_old:&lt;br /&gt;
         E -= 2.0 * h * S[k]&lt;br /&gt;
     E_tot += E&lt;br /&gt;
     E2_tot += E ** 2&lt;br /&gt;
 E_av  = E_tot / float(nsteps)&lt;br /&gt;
 E2_av = E2_tot / float(nsteps)&lt;br /&gt;
 c_V = beta ** 2 * (E2_av - E_av ** 2) / float(N)&lt;br /&gt;
 print(E_av / N,c_V)&lt;br /&gt;
&lt;br /&gt;
=Version=&lt;br /&gt;
See history for version information.&lt;br /&gt;
&lt;br /&gt;
[[Category:Python]] [[Category:Oxford_2024]] [[Category:MOOC_SMAC]]&lt;/div&gt;</description>
			<pubDate>Mon, 04 Mar 2024 22:34:29 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Heatbath_ising.py</comments>		</item>
		<item>
			<title>Two cycles.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Two_cycles.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a Python3 program to sample random permutations P of N elements subject to the constraint that the lengths of the cycles in P can only be one or two. The algorithm is based on a recursive-sampling strategy discussed in my book and presented in [[Oxford_Lectures_2025#Lecture_7:_04_March_2025|Lecture 7]] of my [[Oxford_Lectures_2025|2025 Oxford lectures]]. &lt;br /&gt;
&lt;br /&gt;
__FORCETOC__&lt;br /&gt;
=Description=&lt;br /&gt;
&lt;br /&gt;
To sample permutations of N elements which only contain cycles of length 1 or 2, we derived in Lecture 7 a recursion formula for the number Y_N of such permutations (eq. (7.6)), namely&lt;br /&gt;
 Y_N = Y_{N-1} + (N - 1) Y_{N-2}&lt;br /&gt;
The first term on the right-hand side counts the permutations in which the last element is in a cycle of length 1, while the second term counts the permutations in which the last element is in a cycle of length 2. It thus suffices to first compute the sequence Y_0, Y_1, Y_2, ..., Y_N, then to sample a random number between 0 and Y_N, and to check whether it is smaller than Y_{N-1}. If it is, the last element is in a cycle of length 1, else in one of length 2. We then continue our sampling either with the N - 1 particles that are left, in a permutation about which we know nothing, or else with N - 2 particles. All this may sound a bit mysterious, but it certainly works out OK!&lt;br /&gt;
&lt;br /&gt;
The reason we discuss this algorithm is that a very similar one is used for sampling cycles in the ideal Bose gas (for example, for particles in a three-dimensional harmonic trap). &lt;br /&gt;
&lt;br /&gt;
=Program=&lt;br /&gt;
&lt;br /&gt;
 import random&lt;br /&gt;
 from sympy.combinatorics import Permutation&lt;br /&gt;
 Y = {-1: 0, 0:1, 1:1}&lt;br /&gt;
 N = 4&lt;br /&gt;
 Stats = {}&lt;br /&gt;
 for k in range(2, N + 1):&lt;br /&gt;
     Y[k] = Y[k - 1] + (k - 1) * Y[k - 2]&lt;br /&gt;
 for iter in range(100000):&lt;br /&gt;
     Q = list(range(N))&lt;br /&gt;
     random.shuffle(Q)&lt;br /&gt;
     M = N&lt;br /&gt;
     P = []&lt;br /&gt;
     while M &amp;gt; 0:&lt;br /&gt;
         if random.uniform(0.0, Y[M]) &amp;lt; Y[M - 1]:&lt;br /&gt;
             P.append([Q[M - 1]])&lt;br /&gt;
             M -= 1&lt;br /&gt;
         else:&lt;br /&gt;
             P.append([Q[M - 1] , Q[M - 2]])&lt;br /&gt;
             M -= 2&lt;br /&gt;
     P = tuple(Permutation(P).array_form)&lt;br /&gt;
     Stats[P] = Stats.get(P, 0) + 1&lt;br /&gt;
 print(Stats)&lt;br /&gt;
&lt;br /&gt;
=Output=&lt;br /&gt;
&lt;br /&gt;
The output is a dictionary of permutations (in array form) together with the number of times they were sampled. Here we see that all the permutations are (roughly) equally frequent, that is, they were drawn with equal probabilities. We may also check that all these permutations only contain cycles of length at most two (a small check is sketched below the output).&lt;br /&gt;
&lt;br /&gt;
 {(0, 3, 2, 1): 9801, (0, 2, 1, 3): 10281, (0, 1, 2, 3): 10145, (2, 3, 0, 1): 9940, (3, 1, 2, 0): 9869, &lt;br /&gt;
  (0, 1, 3, 2): 9950, (2, 1, 0, 3): 9830, (3, 2, 1, 0): 10154, (1, 0, 3, 2): 10016, (1, 0, 2, 3): 10014}&lt;br /&gt;
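&lt;br /&gt;
As a small consistency check (a sketch that assumes that the dictionary Stats produced by the program above is still in memory), the cycle-length claim can be verified explicitly:&lt;br /&gt;
 from sympy.combinatorics import Permutation&lt;br /&gt;
 for P in Stats:   # the keys of Stats are permutations in array form&lt;br /&gt;
     cycles = Permutation(list(P)).cyclic_form   # cyclic_form omits the fixed points (1-cycles)&lt;br /&gt;
     assert all(len(cycle) &amp;lt;= 2 for cycle in cycles)&lt;br /&gt;
 print('all sampled permutations only contain cycles of length 1 or 2')&lt;br /&gt;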
&lt;br /&gt;
=Version=&lt;br /&gt;
See history for version information.&lt;br /&gt;
For more information on this program, see [[Oxford_Lectures_2025#Lecture_7:_04_March_2025|Lecture 7]] of my [[Oxford_Lectures_2025|2025 Oxford lectures]].  &lt;br /&gt;
[[Category:Python]] [[Category:Oxford2025]]&lt;/div&gt;</description>
			<pubDate>Tue, 27 Feb 2024 02:32:50 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Two_cycles.py</comments>		</item>
		<item>
			<title>Oxford Lectures 2024</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Oxford_Lectures_2024</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;My [https://www.physics.ox.ac.uk/events/algorithms-and-computations-theoretical-physics-set-lectures 2024 Public Lectures at the University of Oxford (UK)] run from 16 January 2024 through 5 March 2024.&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/c/c5/WK_Lecture1_Oxford2024.pdf Here are the  notes for the first lecture.  ]&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/5/5a/WK_Lecture2_Oxford2024.pdf Here are the notes for the second and third lectures. ]&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/1/19/WK_Lecture4_Oxford2024.pdf Here are the preliminary notes for the fourth lecture.]&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/6/63/WK_Lecture5_Oxford2024.pdf Here are the preliminary notes for the fifth lecture.]&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/9/98/WK_Lecture6_Oxford2024.pdf Here are the notes for the sixth lecture.]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/b/b4/WK_Lecture7_Oxford2024.pdf Here are the notes for the seventh lecture.]&lt;br /&gt;
&lt;br /&gt;
[[Two_cycles.py| Two_cycles.py]] &lt;br /&gt;
&lt;br /&gt;
[[Direct_N_bosons.py| Direct_N_bosons.py]]&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/f/fe/WK_Lecture8_Oxford2024.pdf Here are the notes for the eighth lecture.]&lt;br /&gt;
&lt;br /&gt;
[[Enumerate_ising.py|Enumerate_ising.py]]&lt;br /&gt;
&lt;br /&gt;
[[Markov_ising.py|Markov_ising.py]]&lt;br /&gt;
&lt;br /&gt;
[[Heatbath_ising.py|Heatbath_ising.py]]&lt;br /&gt;
&lt;br /&gt;
[[Cluster_ising.py| Cluster_ising.py]]&lt;/div&gt;</description>
			<pubDate>Mon, 26 Feb 2024 16:59:02 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Oxford_Lectures_2024</comments>		</item>
		<item>
			<title>Direct N bosons.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Direct_N_bosons.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a Python3 program for the direct sampling of N non-interacting (that is, ideal) bosons in a three-dimensional harmonic trap. The program is discussed in my book and presented in [[Oxford_Lectures_2025#Lecture_7:_04_March_2025|Lecture 7]] of my 2025 Oxford lectures. &lt;br /&gt;
&lt;br /&gt;
__FORCETOC__&lt;br /&gt;
&lt;br /&gt;
=Description=&lt;br /&gt;
&lt;br /&gt;
=Program=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 import pylab, math, random&lt;br /&gt;
 def z(beta, k):&lt;br /&gt;
    sum = 1.0 / (1.0 - math.exp(-k * beta)) ** 3&lt;br /&gt;
    return sum&lt;br /&gt;
 def canonic_recursion(beta,N):&lt;br /&gt;
    Z = [1.0]&lt;br /&gt;
    for M in range(1, N + 1):&lt;br /&gt;
       Z.append(sum(Z[k] * z(beta, M - k) for k in range(M)) / M)&lt;br /&gt;
    return Z&lt;br /&gt;
 def pi_list_make(Z, M):&lt;br /&gt;
    pi_list = [0] + [z(beta, k) * Z[M - k] / Z[M] / M for k in range(1, M + 1)]&lt;br /&gt;
    pi_sum = [0]&lt;br /&gt;
    for k in range(1, M + 1):&lt;br /&gt;
       pi_sum.append(pi_sum[k - 1] + pi_list[k])&lt;br /&gt;
    return pi_sum&lt;br /&gt;
 def tower_sample(data, Upsilon): #fully naive tower sampling&lt;br /&gt;
    for k in range(len(data)):&lt;br /&gt;
       if Upsilon &amp;lt; data[k]: break&lt;br /&gt;
    return k&lt;br /&gt;
 def levy_harmonic_path(Del_tau, N):&lt;br /&gt;
    beta = N * Del_tau&lt;br /&gt;
    xN = random.gauss(0.0, 1.0 / math.sqrt(2.0 * math.tanh(beta / 2.0)))&lt;br /&gt;
    x = [xN]&lt;br /&gt;
    for k in range(1, N):&lt;br /&gt;
       Upsilon_1 = 1.0 / math.tanh(Del_tau) + 1.0 / math.tanh((N - k) * Del_tau)&lt;br /&gt;
       Upsilon_2 = x[k - 1]/ math.sinh(Del_tau) + xN / math.sinh((N - k) * Del_tau)&lt;br /&gt;
       x_mean = Upsilon_2 / Upsilon_1&lt;br /&gt;
       sigma = 1.0 / math.sqrt(Upsilon_1)&lt;br /&gt;
       x.append(random.gauss(x_mean, sigma))&lt;br /&gt;
    return x&lt;br /&gt;
 &lt;br /&gt;
 N = 100 # this naive program may overflow for N &amp;gt; 100&lt;br /&gt;
 T_star = 0.1&lt;br /&gt;
 T = T_star * N ** (1.0 / 3.0)&lt;br /&gt;
 beta = 1.0 / T&lt;br /&gt;
 Z = canonic_recursion(beta, N)&lt;br /&gt;
 print(Z)&lt;br /&gt;
 M = N&lt;br /&gt;
 x_config = []&lt;br /&gt;
 y_config = []&lt;br /&gt;
 while M &amp;gt; 0:&lt;br /&gt;
    pi_sum = pi_list_make(Z, M)&lt;br /&gt;
    Upsilon = random.uniform(0.0, 1.0)&lt;br /&gt;
    k = tower_sample(pi_sum, Upsilon)&lt;br /&gt;
    x_config += levy_harmonic_path(beta, k)&lt;br /&gt;
    y_config += levy_harmonic_path(beta, k)&lt;br /&gt;
    M -= k&lt;br /&gt;
 pylab.figure(1)&lt;br /&gt;
 pylab.axis('scaled')&lt;br /&gt;
 pylab.xlim(-5.0, 5.0)&lt;br /&gt;
 pylab.ylim(-5.0, 5.0)&lt;br /&gt;
 pylab.plot(x_config, y_config,'ro')&lt;br /&gt;
 pylab.xlabel('$x$')&lt;br /&gt;
 pylab.ylabel('$y$')&lt;br /&gt;
 pylab.title('3d bosons in trap (2d projection):'+' $N$= '+str(N)+' T* =' + str(T_star))&lt;br /&gt;
 pylab.show()&lt;br /&gt;
&lt;br /&gt;
=Output=&lt;br /&gt;
&lt;br /&gt;
[[Image:Boson trap fast.gif|100px|right|frame|Direct-sampling algorithm for ideal bosons in a trap. The output shown here was produced with a more elaborate version of the program than the one shown on this page]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=&amp;quot;all&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Further reading=&lt;br /&gt;
&lt;br /&gt;
The Path-integral Monte Carlo algorithm &lt;br /&gt;
&lt;br /&gt;
=Version=&lt;br /&gt;
See history for version information.&lt;br /&gt;
For more information on this program, see Lecture 7 of my 2025 Oxford lectures. &lt;br /&gt;
[[Category:Python]] [[Category:Oxford2025]]&lt;/div&gt;</description>
			<pubDate>Mon, 26 Feb 2024 16:49:52 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Direct_N_bosons.py</comments>		</item>
		<item>
			<title>Coupling ising.py</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Coupling_ising.py</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page presents the program coupling_ising.py, a heat-bath algorithm for the Ising model on an LxL square lattice in two dimensions, run for two configurations at a time. The algorithm illustrates the coupling phenomenon.&lt;br /&gt;
&lt;br /&gt;
__FORCETOC__&lt;br /&gt;
=Description=&lt;br /&gt;
&lt;br /&gt;
=Program (in Python3)=&lt;br /&gt;
 import random, math&lt;br /&gt;
 &lt;br /&gt;
 L = 7&lt;br /&gt;
 N = L * L&lt;br /&gt;
 nbr = {i : ((i // L) * L + (i + 1) % L, (i + L) % N,&lt;br /&gt;
             (i // L) * L + (i - 1) % L, (i - L) % N) \&lt;br /&gt;
                                     for i in range(N)}&lt;br /&gt;
 NIter = 100&lt;br /&gt;
 for TT in range(20, 40):&lt;br /&gt;
     T = TT / 10&lt;br /&gt;
     beta = 1.0 / T&lt;br /&gt;
     MeanCoupling = 0&lt;br /&gt;
     for iter in range(NIter):&lt;br /&gt;
         S1 = [-1] * N&lt;br /&gt;
         S2 = [1] * N&lt;br /&gt;
         step = 0&lt;br /&gt;
         while True:&lt;br /&gt;
             step += 1&lt;br /&gt;
             k = random.randint(0, N - 1)&lt;br /&gt;
             Upsilon = random.uniform(0.0, 1.0)&lt;br /&gt;
             h1 = sum(S1[nn] for nn in nbr[k])&lt;br /&gt;
             S1[k] = -1&lt;br /&gt;
             if Upsilon &amp;lt; 1.0 / (1.0 + math.exp(-2.0 * beta * h1)): S1[k] = 1&lt;br /&gt;
             h2 = sum(S2[nn] for nn in nbr[k])&lt;br /&gt;
             S2[k] = -1&lt;br /&gt;
             if Upsilon &amp;lt; 1.0 / (1.0 + math.exp(-2.0 * beta * h2)): S2[k] = 1&lt;br /&gt;
             if S1 == S2:&lt;br /&gt;
                 MeanCoupling += step&lt;br /&gt;
                 break&lt;br /&gt;
     print(T, MeanCoupling / NIter)&lt;br /&gt;
=Output=&lt;br /&gt;
A slightly modified graphics version of this program produces the following output:&lt;br /&gt;
&lt;br /&gt;
[[Image:IsingCoupling.png|left|50px]]&lt;br /&gt;
&amp;lt;br clear=&amp;quot;all&amp;quot; /&amp;gt;&lt;br /&gt;
=Version=&lt;br /&gt;
See history for version information.&lt;br /&gt;
&lt;br /&gt;
[[Category:Python]] [[Category:Honnef_2015]] [[Category:MOOC_SMAC]]&lt;/div&gt;</description>
			<pubDate>Fri, 17 Feb 2023 17:17:32 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Coupling_ising.py</comments>		</item>
		<item>
			<title>Advanced topics MCMC 2023</title>
			<link>http://www.lps.ens.fr/~krauth/index.php/Advanced_topics_MCMC_2023</link>
			<description>&lt;p&gt;Summary: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the homepage of my 2023 lecture course on Advanced topics in Markov-chain Monte Carlo.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/f/f6/CM_2023_1.1.pdf Here is the CM1.1] of my 2023 lecture course Advanced topics in Markov-chain Monte Carlo.&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/2/23/CM_2023_1.2.pdf Here is the CM1.2]&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/4/45/TD01_ICFP_ADV_MCMC.pdf Here is the TD1]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/4/4d/CM_2023_2.1.pdf Here is the CM2.1] of my 2023 lecture course Advanced topics in Markov-chain Monte Carlo.&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/1/11/CM_2023_2.2.pdf Here is the CM2.2]&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/5/51/TD02_2023_ICFP_ADV_MCMC.pdf Here is the TD2]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/e/ef/CM_2023_3.1.pdf Here is the CM3.1] of my 2023 lecture course Advanced topics in Markov-chain Monte Carlo.&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/a/a4/CM_2023_3.2.pdf Here is the CM3.2]&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/2/25/TD03_2023_ICFP_ADV_MCMC.pdf Here is the TD3]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/3/3e/CM_2023_4.1.pdf Here is the CM4.1] of my 2023 lecture course Advanced topics in Markov-chain Monte Carlo.&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/1/1f/CM_2023_4.2.pdf Here is the CM4.2]&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/1/11/TD04_2023_ICFP_ADV_MCMC.pdf Here is the TD4]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/d/df/CM_2023_5.1.pdf Here is the CM5.1] of my 2023 lecture course Advanced topics in Markov-chain Monte Carlo.&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/8/87/CM_2023_5.2.pdf Here is the CM5.2]&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/9/92/CM_2023_5.3.pdf Here is the CM5.3]&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/2/28/TD05_2023_ICFP_ADV_MCMC.pdf Here is the TD5]&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/8/82/ControleMock_2023.pdf Here is the Mock Exam]&lt;br /&gt;
&lt;br /&gt;
[[Coupling ising.py| here is the program Coupling_ising.py]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/e/e3/CM_2023_6.1.pdf Here is the CM6.1] of my 2023 lecture course Advanced topics in Markov-chain Monte Carlo.&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/2/2c/CM_2023_6.2.pdf Here is the CM6.2]&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/e/e6/TD06_2023_ICFP_ADV_MCMC.pdf Here is the TD6]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/7/72/CM_2023_8.1.pdf Here is the CM8.1] of my 2023 lecture course Advanced topics in Markov-chain Monte Carlo.&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/2/29/CM_2023_8.2.pdf Here is the CM8.2]&lt;br /&gt;
&lt;br /&gt;
[http://www.lps.ens.fr/%7Ekrauth/images/a/ab/TD08_2023_ADV_MCMC.pdf Here is the TD8]&lt;/div&gt;</description>
			<pubDate>Wed, 11 Jan 2023 09:27:33 GMT</pubDate>			<dc:creator>Werner</dc:creator>			<comments>http://www.lps.ens.fr/~krauth/index.php/Talk:Advanced_topics_MCMC_2023</comments>		</item>
	</channel>
</rss>