http://www.lps.ens.fr/~krauth/index.php?title=Special:Newpages&feed=atomWerner KRAUTH - New pages [en]2024-03-29T06:40:25ZFrom Werner KRAUTHMediaWiki 1.6.12http://www.lps.ens.fr/~krauth/index.php/Heatbath_ising.pyHeatbath ising.py2024-03-04T22:34:29Z<p>Summary: </p>
<hr />
<div>This page presents the Python3 program heatbath_ising.py, which implements the heat-bath Markov-chain algorithm for the Ising model on an L x L square lattice in two dimensions with periodic boundary conditions.<br />
<br />
__FORCETOC__<br />
=Description=<br />
The program is described in Lecture 8 of my 2024 Oxford lectures, and also in my book. This version of the program estimates the energy per particle and the specific heat. <br />
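As a quick sanity check (an addition, not part of the original page): the heat-bath rule used below assigns S_k = +1 with the conditional Boltzmann probability of the spin in its local field h, which is exactly the expression appearing in the program:

```python
import math

beta, h = 1.0, 2  # example values: inverse temperature, local molecular field

# conditional Boltzmann probability for S_k = +1, given its neighbors
p_up = math.exp(beta * h) / (math.exp(beta * h) + math.exp(-beta * h))
# the equivalent form used in heatbath_ising.py
p_heatbath = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
assert abs(p_up - p_heatbath) < 1e-12
```

The new spin value is thus drawn from the stationary conditional distribution, independently of the current value of S_k.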
=Program=<br />
<br />
<br />
import random, math<br />
<br />
L = 6<br />
N = L * L<br />
nbr = {i : ((i // L) * L + (i + 1) % L, (i + L) % N,<br />
            (i // L) * L + (i - 1) % L, (i - L) % N) \<br />
       for i in range(N)}<br />
nsteps = 10000000<br />
beta = 1.0<br />
S = [random.choice([-1, 1]) for site in range(N)]<br />
E = -0.5 * sum(S[k] * sum(S[nn] for nn in nbr[k]) \<br />
               for k in range(N))<br />
E_tot, E2_tot = 0.0, 0.0<br />
for step in range(nsteps):<br />
    k = random.randint(0, N - 1)<br />
    Upsilon = random.uniform(0.0, 1.0)<br />
    h = sum(S[nn] for nn in nbr[k])<br />
    Sk_old = S[k]<br />
    S[k] = -1<br />
    if Upsilon < 1.0 / (1.0 + math.exp(-2.0 * beta * h)):<br />
        S[k] = 1<br />
    if S[k] != Sk_old:<br />
        E -= 2.0 * h * S[k]<br />
    E_tot += E<br />
    E2_tot += E ** 2<br />
E_av = E_tot / float(nsteps)<br />
E2_av = E2_tot / float(nsteps)<br />
c_V = beta ** 2 * (E2_av - E_av ** 2) / float(N)<br />
print(E_av / N, c_V)<br />
<br />
=Version=<br />
See history for version information.<br />
<br />
[[Category:Python]] [[Category:Oxford_2024]] [[Category:MOOC_SMAC]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Two_cycles.pyTwo cycles.py2024-02-27T02:32:50Z<p>Summary: </p>
<hr />
<div>This is a Python3 program to sample random permutations P of N elements, subject to the constraint that the cycles of P all have length one or two (in other words, P is an involution). The algorithm is based on a recursive-sampling strategy discussed in my book and presented in my 2024 Oxford lectures. <br />
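The dictionary Y built below satisfies the recursion Y_M = Y_{M-1} + (M - 1) Y_{M-2}: element M is either a fixed point (Y_{M-1} choices for the rest) or paired with one of the remaining M - 1 elements. As a check (an addition, not part of the original page), the recursion can be compared against brute-force enumeration of involutions:

```python
import itertools

def num_involutions(N):
    # Y[M] = Y[M - 1] + (M - 1) * Y[M - 2], as in two_cycles.py
    Y = {0: 1, 1: 1}
    for k in range(2, N + 1):
        Y[k] = Y[k - 1] + (k - 1) * Y[k - 2]
    return Y[N]

# brute force: count permutations whose cycles all have length one or two
for N in range(1, 7):
    count = sum(all(p[p[i]] == i for i in range(N))
                for p in itertools.permutations(range(N)))
    assert count == num_involutions(N)
print([num_involutions(N) for N in range(7)])  # prints [1, 1, 2, 4, 10, 26, 76]
```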
<br />
import random<br />
from sympy.combinatorics import Permutation<br />
Y = {-1: 0, 0:1, 1:1}<br />
N = 4<br />
Stats = {}<br />
for k in range(2, N + 1):<br />
    Y[k] = Y[k - 1] + (k - 1) * Y[k - 2]<br />
for iter in range(100000):<br />
    Q = list(range(N))<br />
    random.shuffle(Q)<br />
    M = N<br />
    P = []<br />
    while M > 0:<br />
        if random.uniform(0.0, Y[M]) < Y[M - 1]:<br />
            P.append([Q[M - 1]])<br />
            M -= 1<br />
        else:<br />
            P.append([Q[M - 1], Q[M - 2]])<br />
            M -= 2<br />
    P = tuple(Permutation(P).array_form)<br />
    if P in Stats:<br />
        Stats[P] += 1<br />
    else:<br />
        Stats[P] = 1<br />
print(Stats)</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Oxford_Lectures_2024Oxford Lectures 20242024-02-26T16:59:02Z<p>Summary: </p>
<hr />
<div>My [https://www.physics.ox.ac.uk/events/algorithms-and-computations-theoretical-physics-set-lectures 2024 Public Lectures at the University of Oxford (UK)] run from 16 January 2024 through 5 March 2024.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/c/c5/WK_Lecture1_Oxford2024.pdf Here are the notes for the first lecture. ]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/5/5a/WK_Lecture2_Oxford2024.pdf Here are the notes for the second and third lectures. ]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/1/19/WK_Lecture4_Oxford2024.pdf Here are the preliminary notes for the fourth lecture.]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/6/63/WK_Lecture5_Oxford2024.pdf Here are the preliminary notes for the fifth lecture.]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/9/98/WK_Lecture6_Oxford2024.pdf Here are the notes for the sixth lecture.]<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/b/b4/WK_Lecture7_Oxford2024.pdf Here are the notes for the seventh lecture.]<br />
<br />
[[Two_cycles.py| Two_cycles.py]] <br />
<br />
[[Direct_N_bosons.py| Direct_N_bosons.py]]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/f/fe/WK_Lecture8_Oxford2024.pdf Here are the notes for the eighth lecture.]<br />
<br />
[[Enumerate_ising.py|Enumerate_ising.py]]<br />
<br />
[[Markov_ising.py|Markov_ising.py]]<br />
<br />
[[Heatbath_ising.py|Heatbath_ising.py]]<br />
<br />
[[Cluster_ising.py| Cluster_ising.py]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Direct_N_bosons.pyDirect N bosons.py2024-02-26T16:49:52Z<p>Summary: </p>
<hr />
<div>This is a Python3 program for the direct sampling of N non-interacting bosons in a three-dimensional harmonic trap. <br />
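The single-particle partition function used in the program factorizes over the three Cartesian directions of the harmonic trap; with the ground-state energy set to zero, each direction contributes a geometric sum. A quick numerical check of this identity (an addition to the page; the function z mirrors the one in the program below):

```python
import math

def z(beta, k):
    # 3d harmonic-trap single-particle partition function at inverse temperature k * beta
    return 1.0 / (1.0 - math.exp(-k * beta)) ** 3

beta, k = 0.7, 2  # example values
# explicit sum over the levels E = n_x + n_y + n_z of the 3d harmonic oscillator
z_sum = sum(math.exp(-k * beta * n) for n in range(200)) ** 3
assert abs(z(beta, k) - z_sum) < 1e-9
```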
<br />
<br />
[[Image:Boson trap fast.gif|100px|right|frame|Direct-sampling algorithm for ideal bosons in a trap. The output shown here was produced with a more elaborate version of the program than the one shown on this page]]<br />
<br />
<br clear="all" /><br />
import pylab, math, random<br />
def z(beta, k):<br />
    sum = 1.0 / (1.0 - math.exp(-k * beta)) ** 3<br />
    return sum<br />
def canonic_recursion(beta, N):<br />
    Z = [1.0]<br />
    for M in range(1, N + 1):<br />
        Z.append(sum(Z[k] * z(beta, M - k) for k in range(M)) / M)<br />
    return Z<br />
def pi_list_make(Z, M):<br />
    pi_list = [0] + [z(beta, k) * Z[M - k] / Z[M] / M for k in range(1, M + 1)]<br />
    pi_sum = [0]<br />
    for k in range(1, M + 1):<br />
        pi_sum.append(pi_sum[k - 1] + pi_list[k])<br />
    return pi_sum<br />
def tower_sample(data, Upsilon):  # fully naive tower sampling<br />
    for k in range(len(data)):<br />
        if Upsilon < data[k]: break<br />
    return k<br />
def levy_harmonic_path(Del_tau, N):<br />
    beta = N * Del_tau<br />
    xN = random.gauss(0.0, 1.0 / math.sqrt(2.0 * math.tanh(beta / 2.0)))<br />
    x = [xN]<br />
    for k in range(1, N):<br />
        Upsilon_1 = 1.0 / math.tanh(Del_tau) + 1.0 / math.tanh((N - k) * Del_tau)<br />
        Upsilon_2 = x[k - 1] / math.sinh(Del_tau) + xN / math.sinh((N - k) * Del_tau)<br />
        x_mean = Upsilon_2 / Upsilon_1<br />
        sigma = 1.0 / math.sqrt(Upsilon_1)<br />
        x.append(random.gauss(x_mean, sigma))<br />
    return x<br />
<br />
N = 100  # this naive program may overflow for N > 100<br />
T_star = 0.1<br />
T = T_star * N ** (1.0 / 3.0)<br />
beta = 1.0 / T<br />
Z = canonic_recursion(beta, N)<br />
print(Z)<br />
M = N<br />
x_config = []<br />
y_config = []<br />
while M > 0:<br />
    pi_sum = pi_list_make(Z, M)<br />
    Upsilon = random.uniform(0.0, 1.0)<br />
    k = tower_sample(pi_sum, Upsilon)<br />
    x_config += levy_harmonic_path(beta, k)<br />
    y_config += levy_harmonic_path(beta, k)<br />
    M -= k<br />
pylab.figure(1)<br />
pylab.axis('scaled')<br />
pylab.xlim(-5.0, 5.0)<br />
pylab.ylim(-5.0, 5.0)<br />
pylab.plot(x_config, y_config,'ro')<br />
pylab.xlabel('$x$')<br />
pylab.ylabel('$y$')<br />
pylab.title('3d bosons in trap (2d projection):'+' $N$= '+str(N)+' T* =' + str(T_star))<br />
pylab.show()</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Coupling_ising.pyCoupling ising.py2023-02-17T17:17:32Z<p>Summary: </p>
<hr />
<div>This page presents the program coupling_ising.py, a heat-bath algorithm for the Ising model on an L x L square lattice in two dimensions, run for two configurations at a time. The algorithm illustrates the coupling phenomenon.<br />
<br />
__FORCETOC__<br />
=Description=<br />
Two configurations, initialized as all spins down and all spins up, are updated by the heat-bath algorithm with identical random numbers (the same site k and the same uniform random number Upsilon at each step). Once the two configurations agree, they remain identical forever. The program records the mean coupling time as a function of temperature.<br />
<br />
=Program (in Python3)=<br />
import random, math<br />
<br />
L = 7<br />
N = L * L<br />
nbr = {i : ((i // L) * L + (i + 1) % L, (i + L) % N,<br />
(i // L) * L + (i - 1) % L, (i - L) % N) \<br />
for i in range(N)}<br />
NIter = 100<br />
for TT in range(20, 40):<br />
    T = TT / 10<br />
    beta = 1.0 / T<br />
    MeanCoupling = 0<br />
    for iter in range(NIter):<br />
        S1 = [-1] * N<br />
        S2 = [1] * N<br />
        step = 0<br />
        while True:<br />
            step += 1<br />
            k = random.randint(0, N - 1)<br />
            Upsilon = random.uniform(0.0, 1.0)<br />
            h1 = sum(S1[nn] for nn in nbr[k])<br />
            S1[k] = -1<br />
            if Upsilon < 1.0 / (1.0 + math.exp(-2.0 * beta * h1)): S1[k] = 1<br />
            h2 = sum(S2[nn] for nn in nbr[k])<br />
            S2[k] = -1<br />
            if Upsilon < 1.0 / (1.0 + math.exp(-2.0 * beta * h2)): S2[k] = 1<br />
            if S1 == S2:<br />
                MeanCoupling += step<br />
                break<br />
    print(T, MeanCoupling / NIter)<br />
=Output=<br />
A slightly modified graphics version of this program produces the following output:<br />
<br />
[[Image:IsingCoupling.png|left|50px]]<br />
<br clear="all" /><br />
=Version=<br />
See history for version information.<br />
<br />
[[Category:Python]] [[Category:Honnef_2015]] [[Category:MOOC_SMAC]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Advanced_topics_MCMC_2023Advanced topics MCMC 20232023-01-11T09:27:33Z<p>Summary: </p>
<hr />
<div>This is the homepage of my 2023 lecture course on Advanced topics in Markov-chain Monte Carlo.<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/f/f6/CM_2023_1.1.pdf Here is the CM1.1] of my 2023 lecture course Advanced topics in Markov-chain Monte Carlo.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/2/23/CM_2023_1.2.pdf Here is the CM1.2]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/4/45/TD01_ICFP_ADV_MCMC.pdf Here is the TD1]<br />
<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/4/4d/CM_2023_2.1.pdf Here is the CM2.1] of my 2023 lecture course Advanced topics in Markov-chain Monte Carlo.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/1/11/CM_2023_2.2.pdf Here is the CM2.2]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/5/51/TD02_2023_ICFP_ADV_MCMC.pdf Here is the TD2]<br />
<br />
<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/e/ef/CM_2023_3.1.pdf Here is the CM3.1] of my 2023 lecture course Advanced topics in Markov-chain Monte Carlo.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/a/a4/CM_2023_3.2.pdf Here is the CM3.2]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/2/25/TD03_2023_ICFP_ADV_MCMC.pdf Here is the TD3]<br />
<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/3/3e/CM_2023_4.1.pdf Here is the CM4.1] of my 2023 lecture course Advanced topics in Markov-chain Monte Carlo.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/1/1f/CM_2023_4.2.pdf Here is the CM4.2]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/1/11/TD04_2023_ICFP_ADV_MCMC.pdf Here is the TD4]<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/d/df/CM_2023_5.1.pdf Here is the CM5.1] of my 2023 lecture course Advanced topics in Markov-chain Monte Carlo.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/8/87/CM_2023_5.2.pdf Here is the CM5.2]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/9/92/CM_2023_5.3.pdf Here is the CM5.3]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/2/28/TD05_2023_ICFP_ADV_MCMC.pdf Here is the TD5]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/8/82/ControleMock_2023.pdf Here is the Mock Exam]<br />
<br />
[[Coupling ising.py| here is the program Coupling_ising.py]]<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/e/e3/CM_2023_6.1.pdf Here is the CM6.1] of my 2023 lecture course Advanced topics in Markov-chain Monte Carlo.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/2/2c/CM_2023_6.2.pdf Here is the CM6.2]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/e/e6/TD06_2023_ICFP_ADV_MCMC.pdf Here is the TD6]<br />
<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/7/72/CM_2023_8.1.pdf Here is the CM8.1] of my 2023 lecture course Advanced topics in Markov-chain Monte Carlo.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/2/29/CM_2023_8.2.pdf Here is the CM8.2]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/a/ab/TD08_2023_ADV_MCMC.pdf Here is the TD8]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/TVDTemperingRev.pyTVDTemperingRev.py2022-09-08T10:23:57Z<p>Summary: </p>
<hr />
<div> #<br />
# Simulated tempering for the V-shaped stationary distribution --- Metropolis<br />
# algorithm<br />
#<br />
import random<br />
import pylab<br />
import numpy as np<br />
for n in [10, 20, 40, 80, 160, 320]:<br />
    ReplicaChange = 0.1<br />
    const = 4.0 / n ** 2<br />
    PiStat = {}<br />
    Table = []<br />
<br />
    for x in range(1, n + 1):<br />
        Table.append((x, 0))<br />
        Table.append((x, 1))<br />
        #<br />
        # factor of 1/2 because the total must be normalized<br />
        #<br />
        PiStat[(x, 0)] = 1.0 / float(n) / 2.0<br />
        PiStat[(x, 1)] = const * abs((n + 1) / 2 - x) / 2.0<br />
    PiStat[(0, 0)] = 0.0<br />
    PiStat[(0, 1)] = 0.0<br />
    PiStat[(n + 1, 0)] = 0.0<br />
    PiStat[(n + 1, 1)] = 0.0<br />
    PTrans = np.eye(2 * n)<br />
    Pi = np.zeros([2 * n])<br />
    for x in range(1, n + 1):<br />
        for Rep in [0, 1]:<br />
            i = Table.index((x, Rep))<br />
            Pi[i] = PiStat[(x, Rep)]<br />
            for Dir in [-1, 1]:<br />
                if PiStat[(x + Dir, Rep)] > 0.0:<br />
                    j = Table.index((x + Dir, Rep))<br />
                    PTrans[i, j] = min(1.0, PiStat[(x + Dir, Rep)] / PiStat[(x, Rep)]) / 2.0<br />
                    PTrans[i, i] -= PTrans[i, j]<br />
<br />
    PReplica = np.zeros((2 * n, 2 * n))<br />
    for x in range(1, n + 1):<br />
        i = Table.index((x, 0))<br />
        j = Table.index((x, 1))<br />
        PReplica[i, j] = ReplicaChange * min(1.0, PiStat[(x, 1)] / PiStat[(x, 0)])<br />
        PReplica[i, i] = 1.0 - PReplica[i, j]<br />
        PReplica[j, i] = ReplicaChange * min(1.0, PiStat[(x, 0)] / PiStat[(x, 1)])<br />
        PReplica[j, j] = 1.0 - PReplica[j, i]<br />
    P = PTrans @ PReplica<br />
    Pit = np.zeros([2 * n])<br />
    Pit[0] = 1.0<br />
    xvalues = []<br />
    yvalues = []<br />
    iter = 0<br />
    while True:<br />
        iter += 1<br />
        Pit = Pit @ P<br />
        TVD = sum(np.absolute(Pi - Pit) / 2.0)<br />
        xvalues.append(iter / float(n ** 2))<br />
        yvalues.append(TVD)<br />
        if TVD < 0.1: break<br />
    pylab.plot(xvalues, yvalues, label='$n =$ ' + str(n))<br />
pylab.legend(loc='upper right')<br />
pylab.xlabel("$t/ n^2$ (rescaled time) ")<br />
pylab.ylabel("TVD")<br />
pylab.title("TVD rev tempering on the path graph of $n$ sites")<br />
pylab.show()</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/ThermoIntegrationThermoIntegration2022-09-08T10:20:01Z<p>Summary: [[ThermoIntegration]] moved to [[ThermoIntegration.py]]: Homogenize naming conventions</p>
<hr />
<div> import random<br />
n = 100<br />
NIter = 1000000<br />
AlphaSim = [0, 0.25, 0.5, 0.75, 1.0]<br />
Pi = {}<br />
for Alpha in AlphaSim:<br />
    Pi[(0, Alpha)] = 0.0<br />
    Pi[(n + 1, Alpha)] = 0.0<br />
    for x in range(1, n + 1):<br />
        Pi[(x, Alpha)] = (abs((n + 1) / 2.0 - x)) ** Alpha<br />
x = 1<br />
ZProd = n<br />
for AlphaIndex in range(len(AlphaSim) - 1):<br />
    Alpha = AlphaSim[AlphaIndex]<br />
    AlphaPrime = AlphaSim[AlphaIndex + 1]<br />
    ZRatio = 0<br />
    for iter in range(NIter):<br />
        ZRatio += Pi[(x, AlphaPrime)] / Pi[(x, Alpha)]<br />
        Sigma = random.choice([1, -1])<br />
        if random.uniform(0.0, 1.0) < Pi[(x + Sigma, Alpha)] / Pi[(x, Alpha)]:<br />
            x = x + Sigma<br />
    ZRatio /= NIter<br />
    print(ZRatio)<br />
    ZProd *= ZRatio<br />
print(1.0 / ZProd, 4.0 / n ** 2)</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/CouplingFromThePast.pyCouplingFromThePast.py2022-09-08T10:15:53Z<p>Summary: </p>
<hr />
<div> import random, pylab<br />
N = 5<br />
pos = []<br />
for statistic in range(10000):<br />
    all_arrows = {}<br />
    time_tot = 0<br />
    while True:<br />
        time_tot -= 1<br />
        old_pos = set(range(0, N))<br />
        arrows = [random.randint(-1, 1) for i in range(N)]<br />
        if arrows[0] == -1: arrows[0] = 0<br />
        if arrows[N - 1] == 1: arrows[N - 1] = 0<br />
        all_arrows[time_tot] = arrows<br />
        positions = set(range(N))<br />
        for t in range(time_tot, 0):<br />
            positions = set([b + all_arrows[t][b] for b in positions])<br />
        if len(positions) == 1: break<br />
    a = positions.pop()<br />
    pos.append(a)<br />
pylab.title('Backward coupling: 1-d with walls: position at t=0')<br />
pylab.hist(pos, bins=N, range=(-0.5, N - 0.5), density=True)<br />
pylab.savefig('backward_position_t0.png')<br />
pylab.show()</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/CardShuffle.pyCardShuffle.py2022-09-08T10:12:06Z<p>Summary: </p>
<hr />
<div> import random,pylab<br />
n = 3<br />
data = []<br />
HistoData = {}<br />
for iter1 in range(100000):<br />
    L = list(range(n))<br />
    green_card = n - 1<br />
    for iter2 in range(200000):<br />
        a = L.pop(0)<br />
        L.insert(random.randint(0, n - 1), a)<br />
        if a == green_card:<br />
            LL = tuple(L)<br />
            if LL in HistoData: HistoData[LL] += 1<br />
            else: HistoData[LL] = 1<br />
            data.append(iter2)<br />
            break<br />
print(HistoData)<br />
pylab.hist(data, bins=20, density=True)<br />
pylab.savefig('Histo_shuffle.png')<br />
pylab.show()</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/TVDMetroLift.pyTVDMetroLift.py2022-09-08T10:10:45Z<p>Summary: </p>
<hr />
<div> #<br />
# TVD for the Lifted Metropolis algorithm on the path graph.<br />
#<br />
import random<br />
import pylab<br />
import numpy as np<br />
model = 'Flat'<br />
model = 'VShape'<br />
for n in [10, 20, 40, 80, 160, 320]:<br />
    const = 4.0 / n ** 2<br />
    PiStat = {}<br />
    Table = []<br />
    for x in range(1, n + 1):<br />
        Table.append((x, -1))<br />
        Table.append((x, 1))<br />
        if model == 'Flat':<br />
            PiStat[(x, -1)] = 1.0 / float(n) / 2.0<br />
            PiStat[(x, +1)] = 1.0 / float(n) / 2.0<br />
        elif model == 'VShape':<br />
            PiStat[(x, -1)] = const * abs((n + 1) / 2 - x) / 2.0<br />
            PiStat[(x, 1)] = const * abs((n + 1) / 2 - x) / 2.0<br />
    PiStat[(0, -1)] = 0.0<br />
    PiStat[(0, 1)] = 0.0<br />
    PiStat[(n + 1, -1)] = 0.0<br />
    PiStat[(n + 1, 1)] = 0.0<br />
    PTrans = np.zeros((2 * n, 2 * n))<br />
    Pi = np.zeros([2 * n])<br />
    for x in range(1, n + 1):<br />
        for Sigma in [-1, 1]:<br />
            i = Table.index((x, Sigma))<br />
            Pi[i] = PiStat[(x, Sigma)]<br />
            k = Table.index((x, -Sigma))<br />
            if PiStat[(x + Sigma, Sigma)] > 0.0:<br />
                j = Table.index((x + Sigma, Sigma))<br />
                PTrans[i, j] = min(1.0, PiStat[(x + Sigma, Sigma)] / PiStat[(x, Sigma)])<br />
                PTrans[i, k] = 1.0 - PTrans[i, j]<br />
            else:<br />
                PTrans[i, k] = 1.0<br />
    PResampling = np.zeros((2 * n, 2 * n))<br />
    for x in range(1, n + 1):<br />
        for Sigma in [-1, 1]:<br />
            i = Table.index((x, Sigma))<br />
            j = Table.index((x, -Sigma))<br />
            PResampling[i, j] = 1.0 / n<br />
            PResampling[i, i] = 1.0 - PResampling[i, j]<br />
<br />
    P = PTrans @ PResampling<br />
    Pit = np.zeros([2 * n])<br />
    Pit[0] = 1.0<br />
    xvalues = []<br />
    yvalues = []<br />
    iter = 0<br />
    while True:<br />
        iter += 1<br />
        Pit = np.array(Pit)<br />
        Pit = Pit @ P<br />
        TVD = sum(np.absolute(Pi - Pit) / 2.0)<br />
        xvalues.append(iter / float(n ** 2))<br />
        yvalues.append(TVD)<br />
        if TVD < 0.1: break<br />
    pylab.plot(xvalues, yvalues, label='$n =$ ' + str(n))<br />
pylab.legend(loc='upper right')<br />
pylab.xlabel("$t/ n$ (rescaled time) ")<br />
pylab.ylabel("TVD")<br />
pylab.title("TVD for the lifted Metropolis algorithm on the path graph of $n$ sites")<br />
pylab.show()<br />
</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/TVDMetroPath.pyTVDMetroPath.py2022-09-08T10:09:39Z<p>Summary: </p>
<hr />
<div> #<br />
# TVD for the Metropolis algorithm on the path graph.<br />
#<br />
import random<br />
import pylab<br />
import numpy as np<br />
model = 'Flat'<br />
model = 'VShape'<br />
for n in [10, 20, 40, 80, 160, 320]:<br />
    const = 4.0 / n ** 2<br />
    PiStat = {}<br />
    Table = []<br />
    for x in range(1, n + 1):<br />
        Table.append(x)<br />
        #<br />
        # flat stationary probability distribution.<br />
        #<br />
        if model == 'Flat':<br />
            PiStat[x] = 1.0 / float(n)<br />
        elif model == 'VShape':<br />
            PiStat[x] = const * abs((n + 1) / 2 - x)<br />
    PiStat[0] = 0.0<br />
    PiStat[n + 1] = 0.0<br />
    PTrans = np.eye(n)<br />
    Pi = np.zeros([n])<br />
    for x in range(1, n + 1):<br />
        i = Table.index(x)<br />
        Pi[i] = PiStat[x]<br />
        for Dir in [-1, 1]:<br />
            if PiStat[x + Dir] > 0:<br />
                j = Table.index(x + Dir)<br />
                PTrans[i, j] = min(1.0, PiStat[x + Dir] / PiStat[x]) / 2.0<br />
                PTrans[i, i] -= PTrans[i, j]<br />
    Pit = np.zeros([n])<br />
    Pit[0] = 1.0<br />
    xvalues = []<br />
    yvalues = []<br />
    iter = 0<br />
    while True:<br />
        iter += 1<br />
        Pit = np.array(Pit)<br />
        Pit = Pit @ PTrans<br />
        TVD = sum(np.absolute(Pi - Pit) / 2.0)<br />
        xvalues.append(iter / float(n ** 2 * np.log(n)))<br />
        yvalues.append(TVD)<br />
        if TVD < 0.1: break<br />
    pylab.plot(xvalues, yvalues, label='$n =$ ' + str(n))<br />
pylab.legend(loc='upper right')<br />
pylab.xlabel("$t/ n^2$ (rescaled time) ")<br />
pylab.ylabel("TVD")<br />
pylab.title("TVD for the Metropolis algorithm on the path graph of $n$ sites")<br />
pylab.show()</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Lectures_ICTP_2022Lectures ICTP 20222022-09-05T15:34:59Z<p>Summary: </p>
<hr />
<div>Here is material for the four lectures entitled "Markov-chain Monte Carlo" that I gave at the ICTP Trieste (Italy) in September 2022. <br />
<br />
=1: Monte Carlo algorithms - basic notions=<br />
<br />
[[TVDMetroPath.py|TVDMetroPath.py]]<br />
<br />
[[TVDMetroLift.py|TVDMetroLift.py]]<br />
<br />
[[CardShuffle.py|CardShuffle.py]]<br />
<br />
[[CouplingFromThePast.py|CouplingFromThePast.py]]<br />
<br />
[[ThermoIntegration.py|ThermoIntegration.py]]<br />
<br />
[[TVDTemperingRev.py|TVDTemperingRev.py]]<br />
<br />
[[TVDTemperingLift.py|TVDTemperingLift.py]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Li_Nishikawa_et_al_2022Li Nishikawa et al 20222022-07-24T15:10:36Z<p>Summary: </p>
<hr />
<div>'''B. Li, Y. Nishikawa, P. Hoellmer, L. Carillo, A. C. Maggs, W. Krauth''' '''''Hard-disk pressure computations -- a historic perspective''''' '''J. Chem. Phys. 157, 234111 (2022)'''<br />
<br />
'''Abstract'''<br />
We discuss historic pressure computations for the hard-disk model performed since 1953, and compare them to results that we obtain with a powerful event-chain Monte Carlo and a massively parallel Metropolis algorithm. Like other simple models in the sciences, such as the Drosophila model of biology, the hard-disk model has needed monumental effort to be understood. In particular, we argue that the difficulty of estimating the pressure has not been fully realized in the decades-long controversy over the hard-disk phase-transition scenario. We present the physics of the hard-disk model, the definition of the pressure and its unbiased estimators, several of which are new. We further treat different sampling algorithms and crucial criteria for bounding mixing times in the absence of analytical predictions. Our definite results for the pressure, for up to one million disks, may serve as benchmarks for future sampling algorithms. A synopsis of hard-disk pressure data as well as different versions of the sampling algorithms and pressure estimators are made available in an open-source repository. <br />
<br />
NB: The HistoricDisks open-source repository is [https://github.com/jellyfysh/HistoricDisks here]. <br />
<br />
[http://arxiv.org/pdf/2207.07715 Electronic version (from arXiv)]<br />
<br />
[https://doi.org/10.1063/5.0126437 Published paper] (JCP Editors' Choice 2022 collection)</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/TVDTemperingLift.pyTVDTemperingLift.py2022-03-16T21:56:41Z<p>Summary: </p>
<hr />
<div>This is the final program TVDTemperingLift.py to be discussed in my fourth lecture at the summer school .... at the International Center for Theoretical Physics in Trieste, in September 2022.<br />
<br />
In a previous lecture, we started the discussion of diffusion on the path graph P_n, in other words, the one-dimensional linear lattice of n vertices without periodic boundary conditions. <br />
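The convergence criterion used in these programs is the total variation distance, TVD(pi, pi^t) = (1/2) sum_x |pi(x) - pi^t(x)|. A minimal illustration (an addition to the page; the three-state distributions are made up):

```python
import numpy as np

pi = np.array([0.5, 0.3, 0.2])   # example stationary distribution
pit = np.array([1.0, 0.0, 0.0])  # initial distribution, concentrated on one state
tvd = np.sum(np.abs(pi - pit)) / 2.0
print(tvd)  # prints 0.5
```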
<br />
<br />
import random<br />
import pylab<br />
import numpy as np<br />
for n in [10, 20, 40, 80, 160, 320]:<br />
    ReplicaChange = 0.1<br />
    const = 4.0 / n ** 2<br />
    PiStat = {}<br />
    Table = []<br />
<br />
    for x in range(1, n + 1):<br />
        Table.append((x, -1, 0))<br />
        Table.append((x, 1, 0))<br />
        Table.append((x, -1, 1))<br />
        Table.append((x, 1, 1))<br />
        #<br />
        # factor of 1/4 because the total must be normalized, with two liftings and<br />
        # two replicas<br />
        #<br />
        PiStat[(x, -1, 0)] = 1.0 / float(n) / 4.0<br />
        PiStat[(x, 1, 0)] = 1.0 / float(n) / 4.0<br />
        PiStat[(x, -1, 1)] = const * abs((n + 1) / 2 - x) / 4.0<br />
        PiStat[(x, 1, 1)] = const * abs((n + 1) / 2 - x) / 4.0<br />
    PiStat[(0, -1, 0)] = 0.0<br />
    PiStat[(0, 1, 0)] = 0.0<br />
    PiStat[(0, -1, 1)] = 0.0<br />
    PiStat[(0, 1, 1)] = 0.0<br />
    PiStat[(n + 1, -1, 0)] = 0.0<br />
    PiStat[(n + 1, 1, 0)] = 0.0<br />
    PiStat[(n + 1, -1, 1)] = 0.0<br />
    PiStat[(n + 1, 1, 1)] = 0.0<br />
    Position = (1, 1, 0)<br />
    PTrans = np.zeros((4 * n, 4 * n))<br />
    Pi = np.zeros([4 * n])<br />
    for x in range(1, n + 1):<br />
        for Dir in [-1, 1]:<br />
            for Rep in [0, 1]:<br />
                i = Table.index((x, Dir, Rep))<br />
                Pi[i] = PiStat[(x, Dir, Rep)]<br />
                k = Table.index((x, -Dir, Rep))<br />
                if PiStat[(x + Dir, Dir, Rep)] > 0.0:<br />
                    j = Table.index((x + Dir, Dir, Rep))<br />
                    PTrans[i, j] = min(1.0, PiStat[(x + Dir, Dir, Rep)] / PiStat[(x, Dir, Rep)])<br />
                    PTrans[i, k] = 1.0 - PTrans[i, j]<br />
                else:<br />
                    PTrans[i, k] = 1.0<br />
<br />
    PReplica = np.zeros((4 * n, 4 * n))<br />
    for x in range(1, n + 1):<br />
        for Dir in [-1, 1]:<br />
            i = Table.index((x, Dir, 0))<br />
            j = Table.index((x, Dir, 1))<br />
            PReplica[i, j] = ReplicaChange * min(1.0, PiStat[(x, Dir, 1)] / PiStat[(x, Dir, 0)])<br />
            PReplica[i, i] = 1.0 - PReplica[i, j]<br />
            PReplica[j, i] = ReplicaChange * min(1.0, PiStat[(x, Dir, 0)] / PiStat[(x, Dir, 1)])<br />
            PReplica[j, j] = 1.0 - PReplica[j, i]<br />
    PResampling = np.zeros((4 * n, 4 * n))<br />
    for x in range(1, n + 1):<br />
        for Dir in [-1, 1]:<br />
            i = Table.index((x, Dir, 0))<br />
            j = Table.index((x, -Dir, 0))<br />
            k = Table.index((x, Dir, 1))<br />
            l = Table.index((x, -Dir, 1))<br />
            PResampling[i, j] = 1.0 / (4.0 * n)<br />
            PResampling[i, i] = 1 - 1. / (4.0 * n)<br />
            PResampling[k, l] = 1.0 / (4.0 * n)<br />
            PResampling[k, k] = 1 - 1. / (4.0 * n)<br />
<br />
    P = PTrans @ PReplica @ PResampling<br />
    Pit = np.zeros([4 * n])<br />
    Pit[0] = 1.0<br />
    xvalues = []<br />
    yvalues = []<br />
    iter = 0<br />
    while True:<br />
        iter += 1<br />
        Pit = Pit @ P<br />
        TVD = sum(np.absolute(Pi - Pit) / 2.0)<br />
        xvalues.append(iter / float(n))<br />
        yvalues.append(TVD)<br />
        if TVD < 0.1: break<br />
    pylab.plot(xvalues, yvalues, label='$n =$ ' + str(n))<br />
pylab.legend(loc='upper right')<br />
pylab.xlabel("$t/ n$ (rescaled time) ")<br />
pylab.ylabel("TVD")<br />
pylab.title("TVD lift tempering on the path graph of $n$ sites")<br />
pylab.show()</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/TemperingConductance.pyTemperingConductance.py2022-03-16T16:11:15Z<p>Summary: </p>
<hr />
<div>This is the program TemperingConductance.py, which computes the conductance of the Hildebrand problem. It was presented in the 8th lecture of the 2022 course "Advanced topics in MCMC".<br />
<br />
<br />
# Computing the conductance of the Metropolis algorithm for the V-shaped<br />
# stationary distribution. This, for the moment, is through complete enumeration<br />
# of the power set of the set of all samples. A rigorous argument can be made<br />
# also.<br />
#<br />
import random<br />
import numpy as np<br />
import scipy.linalg as la<br />
from itertools import chain, combinations<br />
<br />
def powerset(iterable): #powerset, but without the empty set.<br />
    s = list(iterable)<br />
    return chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1))<br />
<br />
<br />
def OutsideFlow(S):<br />
    Flow = 0.0<br />
    Weight = 0.0<br />
    for i in S:<br />
        Weight += PiStat[Table[i]]<br />
        for j in range(2 * n):<br />
            if j not in S:<br />
                Flow += PiStat[Table[i]] * P[i, j]<br />
    return Weight, Flow / Weight<br />
<br />
n = 8<br />
for ReplicaChange in [k/100 for k in range(50 + 1)]:<br />
    const = 4.0 / n ** 2<br />
    PiStat = {}<br />
    Table = []<br />
<br />
    for x in range(1, n + 1):<br />
        Table.append((x, 0))<br />
        Table.append((x, 1))<br />
        #<br />
        # factor of 1/2 because the total must be normalized<br />
        #<br />
        PiStat[(x, 0)] = 1.0 / float(n) / 2.0<br />
        PiStat[(x, 1)] = const * abs((n + 1) / 2 - x) / 2.0<br />
    PiStat[(0, 0)] = 0.0<br />
    PiStat[(0, 1)] = 0.0<br />
    PiStat[(n + 1, 0)] = 0.0<br />
    PiStat[(n + 1, 1)] = 0.0<br />
    Position = (1, 0)<br />
    PTrans = np.eye(2 * n)<br />
    for x in range(1, n + 1):<br />
        for Rep in [0, 1]:<br />
            i = Table.index((x, Rep))<br />
            for Dir in [-1, 1]:<br />
                if PiStat[(x + Dir, Rep)] > 0.0:<br />
                    j = Table.index((x + Dir, Rep))<br />
                    PTrans[i, j] = min(1.0, PiStat[(x + Dir, Rep)] / PiStat[(x, Rep)]) / 2.0<br />
                    PTrans[i, i] -= PTrans[i, j]<br />
    PReplica = np.zeros((2 * n, 2 * n))<br />
    for x in range(1, n + 1):<br />
        i = Table.index((x, 0))<br />
        j = Table.index((x, 1))<br />
        PReplica[i, j] = ReplicaChange * min(1.0, PiStat[(x, 1)] / PiStat[(x, 0)])<br />
        PReplica[i, i] = 1.0 - PReplica[i, j]<br />
        PReplica[j, i] = ReplicaChange * min(1.0, PiStat[(x, 0)] / PiStat[(x, 1)])<br />
        PReplica[j, j] = 1.0 - PReplica[j, i]<br />
    P = PTrans @ PReplica<br />
    #<br />
    # compute the conductance.<br />
    #<br />
    x = powerset([i for i in range(2 * n)])<br />
    MinFlow = 100.00<br />
    for i in x:<br />
        S = set(list(i))<br />
        Weight, Flow = OutsideFlow(S)<br />
        if Weight <= 0.5:<br />
            if Flow < MinFlow:<br />
                MinFlow = Flow<br />
                MinS = S<br />
    print(MinS, MinFlow, '2 / 2n + 1 / n^2 = ', 1.0 / (2.0 * n) + 1.0 / (n ** 2))</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/KingsCollege_Masterclass_2022KingsCollege Masterclass 20222022-03-04T00:25:22Z<p>Summary: </p>
<hr />
<div>Masterclass I gave from 28 February to 3 March 2022 at King's College London (Great Britain).<br />
This is the first in-person lecture course I've given outside of ENS since 2019.<br />
<br />
Markov-chain Monte Carlo: A modern primer<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/b/b7/KCL_2022_1.1.pdf 1st lecture]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/f/ff/KCL_2022_1.2.pdf 2nd lecture]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/4/48/KCL_2.1.pdf 3rd lecture]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/2/21/KCL_2.2.pdf 4th lecture]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/d/de/KCL_3.1.pdf 5th lecture]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/7/70/KCL_3.2.pdf 6th lecture]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Advanced_topics_MCMC_2022Advanced topics MCMC 20222022-03-04T00:18:27Z<p>Summary: </p>
<hr />
<div>This is the homepage of my 2022 lecture course on Advanced topics in Markov-chain Monte Carlo.<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/6/67/CM_2022_1.1.pdf Here is the CM1.1] of my 2022 lecture course Advanced topics in Markov-chain Monte Carlo.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/d/d6/CM_2022_1.2.pdf Here is the CM1.2]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/4/45/TD01_ICFP_ADV_MCMC.pdf Here is the TD1] <br />
<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/5/58/CM_2022_2.1.pdf Here is the CM2.1] of my 2022 lecture course Advanced topics in Markov-chain Monte Carlo.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/6/68/CM_2022_2.2.pdf Here is the CM2.2]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/5/58/TD02_ICFP_ADV_MCMC.pdf Here is the TD2] <br />
<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/3/34/CM_2022_3.1.pdf Here is the CM3.1] of my 2022 lecture course Advanced topics in Markov-chain Monte Carlo.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/f/f1/CM_2022_3.2.pdf Here is the CM3.2]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/f/f0/TD03_ICFP_ADV_MCMC.pdf Here is the TD3] <br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/2/27/CM_2022_4.1.pdf Here is the CM4.1] of my 2022 lecture course Advanced topics in Markov-chain Monte Carlo.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/c/c7/CM_2022_4.2.pdf Here is the CM4.2]<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/1/1a/CM_2022_4.3.pdf Here is the CM4.3]<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/e/ea/TD04_ICFP_ADV_MCMC.pdf Here is the TD4] <br />
<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/8/8f/CM_2022_5.1.pdf Here is the CM5.1] of my 2022 lecture course Advanced topics in Markov-chain Monte Carlo.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/1/13/CM_2022_5.2.pdf Here is the CM5.2]<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/e/e4/TD05_ICFP_ADV_MCMC.pdf Here is the TD5] <br />
<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/9/9f/CM_2021_6.1.pdf Here is the CM6.1] of my 2022 lecture course Advanced topics in Markov-chain Monte Carlo.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/0/0e/CM_2021_6.2.pdf Here is the CM6.2]<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/a/aa/TD06_ICFP_ADV_MCMC.pdf Here is the TD6]<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/b/b4/CM_2022_7.1.pdf Here is the CM7.1] of my 2022 lecture course Advanced topics in Markov-chain Monte Carlo.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/4/41/CM_2022_7.2.pdf Here is the CM7.2]<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/3/34/TD07_ICFP_ADV_MCMC.pdf Here is the TD7]<br />
<br />
<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/8/83/CM_2022_8.1.pdf Here is the CM8.1] of my 2022 lecture course Advanced topics in Markov-chain Monte Carlo.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/9/9e/CM_2022_8.2.pdf Here is the CM8.2]<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/b/bb/TD08_ICFP_ADV_MCMC.pdf Here is the TD8]<br />
<br />
Here is the program [[TemperingConductance.py| TemperingConductance.py]], which iterates over the power set of Omega in order to compute the conductance of the simulated-tempering problem.<br />
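The power-set computation of the conductance can be sketched as follows (the 4x4 transition matrix below is a hypothetical stand-in; the actual TemperingConductance.py operates on the simulated-tempering chain's matrix):<br />

```python
import itertools

# Hypothetical 4-state chain: a stand-in for the simulated-tempering
# transition matrix that the actual program would construct.
P = [[0.5, 0.25, 0.25, 0.0],
     [0.25, 0.5, 0.0, 0.25],
     [0.25, 0.0, 0.5, 0.25],
     [0.0, 0.25, 0.25, 0.5]]
n = len(P)
pi = [1.0 / n] * n  # stationary distribution (uniform, as P is doubly stochastic)

# Conductance: minimize the probability flow out of S, divided by pi(S),
# over all subsets S of Omega with pi(S) <= 1/2 (hence the power-set loop).
conductance = min(
    sum(pi[i] * P[i][j] for i in S for j in range(n) if j not in S)
    / sum(pi[i] for i in S)
    for r in range(1, n)
    for S in itertools.combinations(range(n), r)
    if sum(pi[i] for i in S) <= 0.5)
print(conductance)  # 0.25 for this example
```

For this toy matrix the bottleneck sets are the size-two subsets such as {0, 1}, which give conductance 0.25.<br />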
<br />
Here is the program [[TVDTemperingLift.py| TVDTemperingLift.py]], which computes the probability distribution pi^t and its total variation distance (TVD) from the stationary probability distribution.<br />
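The iteration of pi^t and the TVD can be sketched as follows (the 3x3 transition matrix is a hypothetical stand-in; TVDTemperingLift.py applies the same iteration to the lifted simulated-tempering chain):<br />

```python
# Hypothetical 3-state chain; the actual program applies the same
# iteration to the lifted simulated-tempering transition matrix.
P = [[0.6, 0.4, 0.0],
     [0.2, 0.6, 0.2],
     [0.0, 0.4, 0.6]]
pi_stat = [0.25, 0.5, 0.25]  # stationary distribution: pi_stat P = pi_stat
pi_t = [1.0, 0.0, 0.0]       # pi^0: start in state 0
for t in range(50):
    # One step of the evolution pi^{t+1} = pi^t P.
    pi_t = [sum(pi_t[i] * P[i][j] for i in range(3)) for j in range(3)]
# Total variation distance between pi^t and the stationary distribution.
tvd = 0.5 * sum(abs(pi_t[j] - pi_stat[j]) for j in range(3))
print(tvd)
```

After 50 steps the TVD has decayed essentially to zero for this small chain, at a rate set by the second-largest eigenvalue of P.<br />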
<br />
<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/8/83/CM_2022_9.1.pdf Here is the CM9.1] of my 2022 lecture course Advanced topics in Markov-chain Monte Carlo.<br />
<br />
[http://www.lps.ens.fr/%7Ekrauth/images/a/ae/TD09_ICFP_ADV_MCMC.pdf Here is the TD9]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Hoellmer_Maggs_Krauth_2021Hoellmer Maggs Krauth 20212021-12-16T21:07:35Z<p>Summary: </p>
<hr />
<div>'''P. Hoellmer, A. C. Maggs, W. Krauth''' '''''Hard-disk dipoles and non-reversible Markov chains ''''' ''' Journal of Chemical Physics 156, 084108 (2022)'''<br />
<br />
'''Abstract'''<br />
We benchmark event-chain Monte Carlo (ECMC) algorithms for tethered hard-disk dipoles in two dimensions, in view of applying ECMC to water models in molecular simulation. We characterize the rotation dynamics of dipoles through the integrated autocorrelation times of the polarization. The non-reversible straight, reflective, forward, and Newtonian ECMC algorithms are all event-driven and only move a single hard disk at any time. They differ only in their update rules at event times. We show that they realize considerable speedups with respect to the local reversible Metropolis algorithm with single-disk moves. We also find significant speed differences among the ECMC variants. Newtonian ECMC appears particularly well-suited.<br />
<br />
[http://arxiv.org/pdf/2111.11943 Electronic version (from arXiv)]<br />
<br />
[https://doi.org/10.1063/5.0080101 Journal version (subscription required)]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Maggs_Krauth_2021Maggs Krauth 20212021-10-20T09:03:46Z<p>Summary: </p>
<hr />
<div>'''A. C. Maggs, W. Krauth''' '''''Large-scale dynamics of event-chain Monte Carlo''''' ''' Physical Review E 105, 015309 (2022)'''<br />
<br />
'''Abstract'''<br />
Event-chain Monte Carlo (ECMC) accelerates the sampling of hard-sphere systems, and has been generalized to the potentials used in classical molecular simulation. Rather than imposing detailed balance on transition probabilities, the method enforces a weaker global-balance condition in order to guarantee convergence to equilibrium. In this paper we generalize the factor-field variant of ECMC to higher space dimensions. In the two-dimensional fluid phase, factor-field ECMC saturates the lower bound z=0 for the dynamical scaling exponent for local dynamics, whereas molecular dynamics is characterized by z=1 and local Metropolis Monte Carlo by z=2. In the presence of hexatic order, factor fields are not found to speed up the convergence. We indicate applications to the physics of glasses, and note that generalizations of factor fields could couple to orientational order.<br />
<br />
[http://arxiv.org/pdf/2110.09959 Electronic version (from arXiv)]<br />
<br />
[https://link.aps.org/doi/10.1103/PhysRevE.105.015309 Published version (subscription required)]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Hoellmer_Noirault_Li_Maggs_Krauth_2021Hoellmer Noirault Li Maggs Krauth 20212021-10-18T22:49:48Z<p>Summary: </p>
<hr />
<div>'''P. Hoellmer, N. Noirault, B. Li, A. C. Maggs, W. Krauth''' '''''Sparse hard-disk packings and local Markov chains''''' '''Journal of Statistical Physics 187, 31 (2021)'''<br />
<br />
'''Abstract'''<br />
We propose locally stable sparse hard-disk packings, as introduced by Böröczky, as a model for the analysis and benchmarking of Markov-chain Monte Carlo (MCMC) algorithms. We first generate such packings in a square box with periodic boundary conditions and analyze their properties. We then study how local MCMC algorithms, namely the Metropolis algorithm and several versions of event-chain Monte Carlo (ECMC), escape from configurations that are obtained by slightly reducing all disk radii by a relaxation parameter. A scaling analysis is confirmed by simulation results. We obtain two classes of ECMC, one in which the escape time varies algebraically with the relaxation parameter (as for the local Metropolis algorithm) and another in which the escape time scales as the logarithm of the relaxation parameter. We discuss the connectivity of the hard-disk sample space, the ergodicity of local MCMC algorithms, as well as the meaning of packings in the context of the NPT ensemble. Our work is accompanied by open-source, arbitrary-precision software for Böröczky packings (in Python) and for straight, reflective, forward, and Newtonian ECMC (in Go).<br />
<br />
[http://arxiv.org/pdf/2109.13343 Electronic version (from arXiv)]<br />
<br />
[https://doi.org/10.1007/s10955-022-02908-4 Journal article (open source)]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Krauth_2021Krauth 20212021-06-02T09:43:41Z<p>Summary: </p>
<hr />
<div>'''W. Krauth''' '''''Event-Chain Monte Carlo: Foundations, Applications, and Prospects''''' '''Frontiers in Physics 9, 229 (2021)'''<br />
<br />
<br />
'''Abstract''' <br />
This review treats the mathematical and algorithmic foundations of non-reversible Markov chains in the context of event-chain Monte Carlo (ECMC), a continuous-time lifted Markov chain that employs the factorized Metropolis algorithm. It analyzes a number of model applications and then reviews the formulation as well as the performance of ECMC in key models in statistical physics. Finally, the review reports on an ongoing initiative to apply ECMC to the sampling problem in molecular simulation, i.e., to real-world models of peptides, proteins, and polymers in aqueous solution.<br />
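The factorized Metropolis filter mentioned in the abstract can be sketched in a few lines (a toy illustration with made-up factor energy changes, not code from the review):<br />

```python
import math
import random

random.seed(42)
beta = 1.0

def factorized_metropolis_accept(delta_E_factors):
    """Accept a proposed move only if every factor accepts independently,
    each with its own Metropolis probability min(1, exp(-beta * dE_M))."""
    return all(random.uniform(0.0, 1.0) < min(1.0, math.exp(-beta * dE))
               for dE in delta_E_factors)

# The overall acceptance probability is the product of the per-factor
# probabilities: here exp(-0.5) * 1 * exp(-0.3) = exp(-0.8) ~ 0.449.
n_trials = 100000
rate = sum(factorized_metropolis_accept([0.5, -0.2, 0.3])
           for _ in range(n_trials)) / n_trials
print(rate)
```

Because each factor is filtered separately, the factorized acceptance rate is at most that of the standard Metropolis filter applied to the total energy change; it is this per-factor structure that ECMC exploits in its event-driven formulation.<br />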
<br />
[https://www.frontiersin.org/article/10.3389/fphy.2021.663457 article (open access)]<br />
<br />
[https://doi.org/10.3389/fphy.2021.663457 doi link (open access)]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/ECMC_2021_KierfeldECMC 2021 Kierfeld2021-05-10T10:46:18Z<p>Summary: </p>
<hr />
<div>==Event-Chain simulations of liquid-crystalline hard rod systems==<br />
<br />
'''''David Mueller, Clemens Vorsmann, Tobias Kampmann, <u>Jan Kierfeld</u>'''''<br />
<br />
<br />
'''Abstract''' We discuss event-chain (EC) simulations of various hard-rod and hard-needle systems, as well as mixed systems of hard disks suspended in hard needles. The EC technique is applied to the end points of the hard needles or rods. Hard-rod collisions then give rise to 3-particle interactions among the end points, and the lifting framework has to be generalized accordingly. ECMC simulations of hard needles in two dimensions and hard spherocylinders in three dimensions reproduce the correct phase behavior. ECMC is particularly suited for simulations of the effective interactions between hard colloids suspended in nematic hard rods, such as two-dimensional hard disks suspended in hard needles. Here, we find interesting effective interactions, which give rise to chaining of the hard disks along the nematic director and which arise from a strong depletion attraction.<br />
<br />
----<br />
<br />
'''Slides''' [http://www.lps.ens.fr/%7Ekrauth/images/5/52/ECMC_2021_Kierfeld.pdf here]<br />
<br />
'''Recording'''<br />
<br />
'''Further material'''<br />
<br />
----<br />
<br />
[[Workshop_ECMC_11_May_2021|back to 2021 ECMC workshop]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/ECMC_2021_SuwaECMC 2021 Suwa2021-05-10T10:45:01Z<p>Summary: </p>
<hr />
<div>==Directed worm algorithm==<br />
<br />
'''Hidemaro Suwa'''<br />
<br />
'''''Department of Physics, The University of Tokyo, Tokyo (Japan)'''''<br />
<br />
'''Abstract''' The worm algorithm is a versatile technique in the Markov chain Monte Carlo method for both classical and quantum systems. The algorithm substantially alleviates critical slowing down and reduces the dynamic critical exponents of various classical systems. The worm update can be viewed as a nontrivial two-particle (kink) problem where the diffusivity is a key to efficient sampling. Using the geometric allocation approach to optimize the worm scattering process, we have proposed a directed worm algorithm that significantly improves the computational efficiency: worm backscattering is averted, and forward scattering is favored [1]. Our approach successfully enhances the diffusivity of the worm head (kink), being approximately 25 times as efficient as the conventional worm update for the simple cubic lattice model. Surprisingly, our algorithm is even more efficient than the Wolff cluster algorithm, one of the best update algorithms. We estimate the dynamic critical exponent of the simple cubic lattice Ising model to be z=0.27 in the worm update. Our approach can be applied to a wide range of physical systems, such as the phi^4 model, the Potts model, the O(n) loop model, lattice QCD, and frustrated spin systems in combination with the dual worm formalism.<br />
<br />
[1] Hidemaro Suwa, Phys. Rev. E 103, 013308 (2021)<br />
<br />
----<br />
<br />
'''Slides''' [http://www.lps.ens.fr/%7Ekrauth/images/b/b3/ECMC_2021_Suwa.pdf here]<br />
<br />
'''Recording'''<br />
<br />
'''Further material'''<br />
<br />
----<br />
<br />
[[Workshop_ECMC_11_May_2021|back to 2021 ECMC workshop]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/ECMC_2021_MaggsECMC 2021 Maggs2021-05-10T10:41:16Z<p>Summary: </p>
<hr />
<div>==Exploring factorizations in the event chain algorithm==<br />
<br />
'''Anthony Maggs, Ze Lei, Werner Krauth'''<br />
<br />
'''''CNRS-ESPCI, Paris (France)'''''<br />
<br />
'''Abstract''' Event chain algorithms give us the freedom to split potentials into non-equivalent forms displaying very different large scale behaviour in their time evolution. All splittings guarantee convergence to the equilibrium Boltzmann distribution. We study the dynamics of different splittings and show that good choices can lead to an accelerated sampling of the equilibrium state.<br />
<br />
----<br />
<br />
'''Slides''' [http://www.lps.ens.fr/%7Ekrauth/images/0/09/ECMC_2021_Maggs.pdf here]<br />
<br />
'''Recording'''<br />
<br />
'''Further material'''<br />
<br />
----<br />
<br />
[[Workshop_ECMC_11_May_2021|back to 2021 ECMC workshop]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/ECMC_2021_HoellmerECMC 2021 Hoellmer2021-05-10T10:39:11Z<p>Summary: </p>
<hr />
<div>==Dipole rotations in non-reversible Markov Chains==<br />
<br />
'''<u>Philipp Höllmer</u> , A. C. Maggs, Werner Krauth''' <br />
<br />
'''''Bethe Center for Theoretical Physics, University of Bonn, Bonn (Germany)'''''<br />
<br />
'''''CNRS UMR7083, ESPCI Paris, PSL Research University, Paris (France)'''''<br />
<br />
'''''Laboratoire de Physique de l’Ecole normale supérieure, Paris (France)'''''<br />
<br />
'''Abstract''' In recent years, several algorithms belonging to the family of event-chain Monte Carlo (ECMC) have been proposed. They share the concept of particle and displacement lifting, where an active particle is displaced on a non-interacting trajectory in a lifted direction. The individual algorithms differ, however, in the update scheme of the lifting variables in the events that are required to construct a non-reversible continuous-time Markov chain. We study straight ECMC with direction sweeps, reflective ECMC, Newtonian ECMC, and forward ECMC for many simplified two-dimensional extended flexible dipoles that resemble water molecules in the context of molecular-simulation models. Here, we point out possible pitfalls of the straight and reflective versions. We show that the dynamics of the polarization of many dipoles are analogous to a simple Gaussian random walk and a path integral. The polarization's integrated autocorrelation times show that straight ECMC, which was the superior variant for the problem of two-dimensional hard disks, rotates the dipoles slowly in comparison to the other methods. In comparison to a reversible Metropolis algorithm, the optimal ECMC algorithm yields a speedup that increases for lower densities up to a factor of 60, which thus motivates its application to systems of long-range interacting extended molecules at the core of the JeLLyFysh project.<br />
<br />
----<br />
<br />
'''Slides''' [http://www.lps.ens.fr/%7Ekrauth/images/d/d5/ECMC_2021_Hoellmer.pdf here]<br />
<br />
'''Recording'''<br />
<br />
'''Further material'''<br />
<br />
----<br />
<br />
[[Workshop_ECMC_11_May_2021|back to 2021 ECMC workshop]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/ECMC_2021_MugitaECMC 2021 Mugita2021-05-10T10:38:00Z<p>Summary: </p>
<hr />
<div>==Non-equilibrium response and slow equilibration in hard disk systems==<br />
<br />
'''<u>Daigo Mugita</u> and Masaharu Isobe'''<br />
<br />
'''''Nagoya Institute of Technology, Nagoya (Japan)'''''<br />
<br />
'''Abstract''' The relaxation from a non-equilibrium state to equilibrium depends on the methodology and the situation, and its mechanism at the microscopic level has not been well investigated. We focus on the non-equilibrium response during the equilibration process induced by a disturbance of homogeneous expansion in simple hard-disk systems. Large-scale simulations by event-driven molecular dynamics reveal that anomalously slow equilibration toward the liquid state emerges when we start from the coexistence phase. The origin of this slow decay is investigated using the probability distributions of the local density and the orientational order parameter, whose inhomogeneity seems to cause the anomalously slow equilibration. Results obtained with other methods, such as event-chain Monte Carlo and Newtonian event-chain Monte Carlo, are also presented.<br />
<br />
----<br />
<br />
'''Slides''' [http://www.lps.ens.fr/%7Ekrauth/images/8/84/ECMC_2021_Mugita.pdf here]<br />
<br />
'''Recording'''<br />
<br />
'''Further material'''<br />
<br />
----<br />
<br />
[[Workshop_ECMC_11_May_2021|back to 2021 ECMC workshop]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/ECMC_2021_LiECMC 2021 Li2021-05-10T10:36:50Z<p>Summary: </p>
<hr />
<div>==Progress in parallelization of ECMC in 2D hard-sphere system==<br />
<br />
'''Botao Li'''<br />
<br />
'''''LPENS, Ecole normale supérieure, Paris (France)'''''<br />
<br />
We present a multithreaded version of event-chain Monte Carlo (ECMC) for 2D hard-sphere systems (arXiv:2004.11040, DOI: 10.1016/j.cpc.2020.107702). The current implementation is based on the straight event-chain (SEC) algorithm. In the multithreaded ECMC, multiple spheres move simultaneously on different threads. The correctness of the multithreaded algorithm is guaranteed by comparing it with its single-threaded version, which satisfies the global-balance condition explicitly. When ignoring synchronization, the multithreaded ECMC processes 10^12 collisions per hour on an x86 CPU with 20 physical cores. However, the correctness of this algorithm requires rather frequent synchronization, which prevents the multithreaded ECMC from reaching its full potential. Several attempts have been made to fix this problem.<br />
<br />
----<br />
<br />
'''Slides''' [http://www.lps.ens.fr/%7Ekrauth/images/c/cb/ECMC_2021_Li.pdf here]<br />
<br />
'''Recording'''<br />
<br />
'''Further material'''<br />
<br />
----<br />
<br />
[[Workshop_ECMC_11_May_2021|back to 2021 ECMC workshop]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/ECMC_2021_MonemvassitisECMC 2021 Monemvassitis2021-05-10T10:29:17Z<p>Summary: </p>
<hr />
<div>==Ergodicity in bidimensional sphere systems==<br />
<br />
'''<u>Athina Monemvassitis</u> A. Guillin, M. Michel, S. Monteil'''<br />
<br />
'''''Laboratoire de Mathématiques Blaise Pascal, Université Clermont Auvergne (France)'''''<br />
<br />
<br />
'''Abstract''' Event-chain Monte Carlo (ECMC) algorithms are fast irreversible Markov-chain Monte Carlo methods. Developed first for hard-sphere systems [1], they rely on breaking the detailed-balance condition to speed up the sampling with respect to their reversible counterparts. However, to ensure sampling of the correct target distribution, they require the property of ergodicity. I will present a proof of ergodicity (resp. connectivity) for bidimensional soft (resp. hard) sphere systems, casting the ECMC methods in the framework of Piecewise Deterministic Markov Processes [2].<br />
<br />
[1] Bernard, Etienne P. and Krauth, Werner and Wilson, David B. "Event-chain Monte Carlo algorithms for hard-sphere systems." Phys. Rev. E, vol. 80, 2009.<br />
<br />
[2] Davis, M. H. A. “Piecewise-Deterministic Markov Processes: A General Class of Non-Diffusion Stochastic Models.” Journal of the Royal Statistical Society. Series B (Methodological), vol. 46, no. 3, 1984.<br />
<br />
----<br />
<br />
'''Slides''' [http://www.lps.ens.fr/%7Ekrauth/images/1/13/ECMC_2021_Monemvassitis.pdf here]<br />
<br />
'''Recording'''<br />
<br />
'''Further material'''<br />
<br />
----<br />
<br />
[[Workshop_ECMC_11_May_2021|back to 2021 ECMC workshop]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/ECMC_2021_WeiseECMC 2021 Weise2021-05-10T10:27:44Z<p>Summary: </p>
<hr />
<div>==Event-Chain Simulations of Bundled Polymer Networks==<br />
<br />
<br />
'''<u>Lukas Weise</u>, Tobias Kampmann, Jan Kierfeld'''<br />
<br />
'''''University of Dortmund, Dortmund (Germany)'''''<br />
<br />
'''Abstract''' We simulate systems of attractive Semiflexible Harmonic Chain (SHC) polymers with the PolyEC framework in quasi-two dimensions. An isotropic initialization of the system evolves into a network of densely packed bundles of polymers. The resulting cellular structure tends to minimize the overall bundle length, which gives rise to coarsening dynamics reminiscent of the behavior of foams. Measuring the tension within the bundles allows us to derive scaling laws describing the coarsening process. Deviations from a foam-like structure are observed for the angles between bundles of the network, as a consequence of the non-negligible bending stiffness.<br />
<br />
----<br />
<br />
'''Slides''' [http://www.lps.ens.fr/%7Ekrauth/images/7/73/ECMC_2021_Weise.pdf here]<br />
<br />
'''Recording'''<br />
<br />
'''Further material'''<br />
<br />
----<br />
<br />
[[Workshop_ECMC_11_May_2021|back to 2021 ECMC workshop]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/ECMC_2021_IsobeECMC 2021 Isobe2021-05-10T10:25:45Z<p>Summary: </p>
<hr />
<div>==Equilibration in Event-Chain MC, Event-Driven MD, and other methods==<br />
<br />
'''Masaharu Isobe'''<br />
<br />
'''''Nagoya Institute of Technology, Nagoya (Japan)'''''<br />
<br />
'''Abstract''' Recent progress in the methodologies of Event-Chain Monte Carlo (ECMC) and its variants raises a fascinating research question: which algorithm equilibrates a system fastest? Every method discussed here generates, in principle, the same equilibrium states from arbitrary initial states in the long-time limit. However, equilibration in CPU time strongly depends on the situation and the method, and its microscopic mechanisms, especially in two or higher dimensions, are still elusive. We are investigating them systematically in various settings of the simple, well-known hard-disk/sphere systems. We will address past and ongoing work on the efficiency of Event-Chain MC by comparing it with Event-Driven MD and other methods. A summary will be presented at the workshop.<br />
<br />
----<br />
<br />
'''Slides''' [http://www.lps.ens.fr/%7Ekrauth/images/9/91/ECMC_2021_Isobe.pdf here]<br />
<br />
'''Recording'''<br />
<br />
'''Further material'''<br />
<br />
----<br />
<br />
[[Workshop_ECMC_11_May_2021|back to 2021 ECMC workshop]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/ECMC_2021_KampmannECMC 2021 Kampmann2021-05-10T10:21:23Z<p>Summary: </p>
<hr />
<div>==Event-Chain-Rattling==<br />
<br />
'''<u>Tobias Kampmann</u>, Jan Kierfeld'''<br />
<br />
'''''University of Dortmund, Dortmund (Germany)'''''<br />
<br />
'''Abstract''' We present our event-chain framework PolyEC, which is suitable for the simulation of various colloidal systems. We focus on modularity and extensibility to handle heterogeneous systems. Besides the simulation itself, event-chain-specific properties allow for the efficient generation of initial configurations of systems with hard interactions, such as hard spheres. We show how this so-called EC-Rattling can efficiently generate overlap-free initial configurations in a polymer melt. Due to their slow reptational dynamics, polymer melts are especially hard to equilibrate. Our scheme creates initial configurations very close to equilibrium and can compete with state-of-the-art techniques.<br />
<br />
----<br />
<br />
'''Slides''' [http://www.lps.ens.fr/%7Ekrauth/images/d/d7/ECMC_2021_Kampmann.pdf here]<br />
<br />
'''Recording'''<br />
<br />
'''Further material'''<br />
<br />
----<br />
<br />
[[Workshop_ECMC_11_May_2021|back to 2021 ECMC workshop]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/ECMC_2021_EngelECMC 2021 Engel2021-05-10T10:17:12Z<p>Summary: </p>
<hr />
<div>==Newtonian Event-Chain Monte Carlo with Spheres and Polyhedra==<br />
<br />
<br />
'''Marco Klement, <u>Michael Engel</u>'''<br />
<br />
'''''Institute for Multiscale Simulation, Friedrich-Alexander University Erlangen-Nürnberg (Germany)'''''<br />
<br />
'''Abstract''' Event-driven molecular dynamics is efficient because its Newtonian dynamics equilibrates fluctuations with the speed of sound. Monte Carlo simulation is efficient if performed with correlated position updates in event chains. Here, we combine the core concepts of both into a new algorithm involving Newtonian event chains. We implement Newtonian event-chain Monte Carlo (NEC) for spheres [1] and convex polyhedra [2] in the open source general-purpose particle simulation toolkit HOOMD-blue for serial and parallel simulation. For polyhedra, our implementation makes use of an improved computational geometry algorithm XenoSweep, which predicts sweep collision in a particularly simple way. The speed-up of NEC over Monte Carlo is between a factor of 10 for spheres or nearly spherical polyhedra and a factor of 2 for highly aspherical polyhedra.<br />
<br />
[1] M. Klement, M. Engel, J. Chem. Phys. 150, 174108 (2019)<br />
<br />
[2] M. Klement, S. Lee, J.A. Anderson, M. Engel, arXiv:2104.06829 (2021)<br />
<br />
----<br />
<br />
'''Slides''' [http://www.lps.ens.fr/%7Ekrauth/images/d/d8/ECMC_2021_Engel.pdf here]<br />
<br />
'''Recording'''<br />
<br />
'''Further material'''<br />
<br />
----<br />
<br />
[[Workshop_ECMC_11_May_2021|back to 2021 ECMC workshop]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Workshop_ECMC_11_May_2021Workshop ECMC 11 May 20212021-05-10T09:02:35Z<p>Summary: </p>
<hr />
<div>==Online workshop on ECMC and related subjects==<br />
<br />
===Tuesday 11 May 2021 8:30am - 12:30pm (Paris time)===<br />
<br />
{|border="1"<br />
|-<br />
!8:30-8:35 <br />
!colspan="3"|Welcome check-in(stallation) e-coffee<br />
|-<br />
!8:35-9:50 <br />
!colspan="3"|First series of talks (Session chair: Synge Todo (Tokyo))<br />
|-<br />
|8:35-9:00 <br />
|Michael Engel <br />
|Erlangen <br />
|[[ECMC_2021_Engel|Newtonian Event-Chain Monte Carlo with Spheres and Polyhedra]] [http://www.lps.ens.fr/%7Ekrauth/images/d/d8/ECMC_2021_Engel.pdf (pdf)]<br />
|-<br />
|9:00-9:25 <br />
|Tobias Kampmann <br />
|Dortmund <br />
|[[ECMC_2021_Kampmann|Event-Chain-Rattling]] [http://www.lps.ens.fr/%7Ekrauth/images/d/d7/ECMC_2021_Kampmann.pdf (pdf)]<br />
|-<br />
|9:25-9:50 <br />
|Masaharu Isobe <br />
|Nagoya <br />
|[[ECMC_2021_Isobe|Equilibration in Event-Chain MC, Event-Driven MD, and other methods]] [http://www.lps.ens.fr/%7Ekrauth/images/9/91/ECMC_2021_Isobe.pdf (pdf)]<br />
|-<br />
!9:50-10:00<br />
!colspan="3"|Discussion - e-coffee - break - breakout<br />
|-<br />
!10:00-11:00 <br />
!colspan="3"|Short talks (Session chair: Manon Michel (Clermont-Ferrand))<br />
|-<br />
|10:00-10:12 <br />
|Lukas Weise <br />
|Dortmund <br />
|[[ECMC_2021_Weise|Event-Chain Simulations of Bundled Polymer Networks]] [http://www.lps.ens.fr/%7Ekrauth/images/7/73/ECMC_2021_Weise.pdf (pdf)]<br />
|-<br />
|10:12-10:24 <br />
|Athina Monemvassitis <br />
|Clermont-Ferrand <br />
|[[ECMC_2021_Monemvassitis|Ergodicity in bidimensional sphere systems]] [http://www.lps.ens.fr/%7Ekrauth/images/1/13/ECMC_2021_Monemvassitis.pdf (pdf)]<br />
|-<br />
|10:24-10:36 <br />
|Botao Li <br />
|Paris <br />
|[[ECMC_2021_Li|Progress in parallelization of ECMC in 2D hard-sphere system]] [http://www.lps.ens.fr/%7Ekrauth/images/c/cb/ECMC_2021_Li.pdf (pdf)]<br />
|-<br />
|10:36-10:48 <br />
|Daigo Mugita <br />
|Nagoya <br />
|[[ECMC_2021_Mugita|Non-equilibrium response and slow equilibration in hard disk systems]] [http://www.lps.ens.fr/%7Ekrauth/images/8/84/ECMC_2021_Mugita.pdf (pdf)]<br />
|-<br />
|10:48-11:00 <br />
|Philipp Hoellmer <br />
|Bonn <br />
|[[ECMC_2021_Hoellmer|Dipole rotations in non-reversible Markov Chains]] [http://www.lps.ens.fr/%7Ekrauth/images/d/d5/ECMC_2021_Hoellmer.pdf (pdf)]<br />
|-<br />
!11:00-11:10 <br />
!colspan="3"|Discussion - e-coffee - break - <br />
|-<br />
!11:10-12:25<br />
!colspan="3"|Second series of talks (Session chair: Michael Engel (Erlangen))<br />
|-<br />
|11:10-11:35 <br />
|Anthony C. Maggs <br />
|Paris <br />
|[[ECMC_2021_Maggs|Exploring factorizations in the event chain algorithm]] [http://www.lps.ens.fr/%7Ekrauth/images/0/09/ECMC_2021_Maggs.pdf (pdf)]<br />
|-<br />
|11:35-12:00 <br />
|Hidemaro Suwa <br />
|Tokyo <br />
|[[ECMC_2021_Suwa|Directed worm algorithm]] [http://www.lps.ens.fr/%7Ekrauth/images/b/b3/ECMC_2021_Suwa.pdf (pdf)]<br />
|-<br />
|12:00-12:25 <br />
|Jan Kierfeld <br />
|Dortmund <br />
|[[ECMC_2021_Kierfeld|Event-Chain simulations of liquid-crystalline hard rod systems]] [http://www.lps.ens.fr/%7Ekrauth/images/5/52/ECMC_2021_Kierfeld.pdf (pdf)]<br />
|-<br />
!12:25-12:30 <br />
!colspan="3"|Conclusion<br />
|-<br />
|}</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Advanced_topics_MCMC_2021Advanced topics MCMC 20212021-01-19T19:33:43Z<p>Summary: /* Week 1 (13 January 2021): Transition matrices: From the balance conditions to mixing */</p>
<hr />
<div>This is the homepage for the ICFP course: Advanced Topics in Markov-chain Monte Carlo that is running from 13 January 2021 through 17 March 2021. <br />
<br />
Lectures/TD start at 14:00 on Wednesday afternoons and last until 17:00 (on the same date ;)). <br />
<br />
We are using a mix of Lectures/inverted classroom/student participation in lectures/slack... <br />
<br />
ICFP students: Please send me email if you want to join the course, so I can send you an invitation to the slack work space MCMC-AT.<br />
<br />
Latest News:<br />
* Second course on 20 January 2021 (via Zoom). We will have the first student participation directly during the lecture ('en plein CM').<br />
<br />
==Week 1 (13 January 2021): Transition matrices: From the balance conditions to mixing==<br />
<br />
References for Week 1:<br />
<br />
* W. Krauth, "Statistical Mechanics: Algorithms and Computations" (Oxford, 2006) (for the introduction)<br />
* D. A. Levin, Y. Peres, E. L. Wilmer, "Markov Chains and Mixing Times" (American Mathematical Society, 2008)<br />
* M. Weber, "Eigenvalues of non-reversible Markov chains - A case study" ZIB report (2017)<br />
<br />
<br />
==Week 2 (20 January 2021): Transition matrices: From the balance conditions to mixing==<br />
<br />
References for Week 2:<br />
<br />
* A. Sinclair, M. Jerrum "Approximate Counting, Uniform Generation and Rapidly Mixing Markov Chains [[https://people.eecs.berkeley.edu/~sinclair/approx.pdf Information and Computation 82, 93-133 (1989)]] (We only need Lemma 3.3)</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Group_Webinar_October_14%2C_2020:_Liang_QinGroup Webinar October 14, 2020: Liang Qin2020-11-20T16:54:06Z<p>Summary: </p>
<hr />
<div>'''Group Webinar Wednesday, 14 October 2020, 12h00'''<br />
<br />
'''Speaker:''' '''''Liang Qin (LPENS)'''''<br />
<br />
'''Title:''' '''''Application of irreversible Monte Carlo in realistic long-range systems'''''<br />
<br />
'''Abstract:'''<br />
In this talk, I will present our study of irreversible Monte Carlo for classical long-range particle systems over the past three years, which paves the way for replacing widespread molecular dynamics in large-scale biochemical simulation. First, I will introduce methods used in atomic simulation and the basics of our event-chain Monte Carlo (ECMC). The second part presents the factorized Coulomb calculation and the dipole factor; their combination leads to an O(N log N)-per-sweep dipole-sampling algorithm. In the third part, I describe the universal ECMC particle simulator that we developed, named JeLLyFysh, and give a brief introduction to its usage. In part four, systems of many water molecules are simulated with the help of JeLLyFysh, and I will focus on the resulting rotational dynamics. This leads to our recent study of sequential Monte Carlo to rapidly relax molecular orientations.<br />
<br />
'''Reference:''' https://hal.archives-ouvertes.fr/tel-02998657<br />
<br />
[[Group_WK#Group_Webinar|Return to Group Webinars]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Bonn-Paris_Webinar_November_18_20%2C_2020:_Philipp_H%C3%B6llmerBonn-Paris Webinar November 18 20, 2020: Philipp Höllmer2020-11-20T16:48:58Z<p>Summary: </p>
<hr />
<div>'''Bonn-Paris Webinar Wednesday, 18 November 2020, 12h00'''<br />
<br />
'''Speaker:''' '''''Philipp Höllmer (University of Bonn)'''''<br />
<br />
'''Title:''' '''''Fast sequential Markov chains'''''<br />
<br />
'''Abstract:'''<br />
In this talk, I will motivate and present a non-reversible Markov-chain Monte Carlo (MCMC) algorithm for particle systems, in which the direction of motion evolves deterministically. The sequential direction-sweep MCMC can be applied to a wide range of original reversible or non-reversible Markov chains, such as the Metropolis algorithm or the event-chain Monte Carlo (ECMC) algorithm. I will discuss sequential MCMC for a simplified two-dimensional dipole model, and show how it profoundly modifies the Markov-chain trajectory. Long excursions, with persistent rotation in one direction, alternate with long sequences of rapid zigzags resulting in persistent rotation in the opposite direction. Finally, I will show that sequential MCMC is able to reduce mixing times. Here, an especially large speedup can be obtained by using a large set of possible directions instead of only allowing directions that are parallel to the Cartesian axes. This makes sequential MCMC particularly useful for greatly accelerating ECMC simulations of molecular systems in the future.<br />
<br />
'''Reference:''' https://arxiv.org/abs/2007.15615<br />
<br />
'''Live Recording:''' https://youtu.be/6-3tuTbpgSk<br />
<br />
[[Group_WK#Group_Webinar|Return to Group Webinars]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/ICFP_Projets_bibliographiques_2020ICFP Projets bibliographiques 20202020-09-02T08:10:17Z<p>Summary: </p>
<hr />
<div>This is the homepage for the '''ICFP Projets bibliographiques''' (library-based project) for the fall/winter semester 2020 in the ICFP master at ENS.<br />
<br />
<br />
Responsable: Werner KRAUTH [mailto:werner.krauth@ens.fr mail] Please contact me if you want to do a library-based project. I will then organize a quick personal / video meeting. <br />
<br />
'''Latest updates:'''<br />
* 02 September 2020: Website set up.<br />
* 07 September 2020: Added remarks about remote contacts.<br />
<br />
==Idea of the library-based projects at ENS==<br />
<br />
* Seek out, understand, classify relevant scientific literature on a chosen topic.<br />
* Enter in contact with an exciting research topic.<br />
* Enter in contact with a junior or senior professor/researcher at ENS or another accredited institution in the Greater Paris area, see her/him repeatedly.<br />
* Find out that, in this special year of COVID-19, professors and researchers are very easy to contact and to ask questions of, because everyone has gotten used to video conferences.<br />
* Present the topic both in written form and orally in a scientifically sound and pedagogical manner.<br />
* Work in a curiosity-driven mode.<br />
* Learn essential skills.<br />
* If successful: Gain 6 ECTS.<br />
<br />
==Organization==<br />
<br />
* YOU [mailto:werner.krauth@ens.fr send mail to ME] declaring that you're interested. Include in your mail what you are generally interested in, etc.<br />
* I will organize exchange and individual meetings (in person / on video) with any interested student.<br />
* YOU find a possible supervisor, and an interesting project (Come see me or other Professors---in person or via video---if you need suggestions). This process should be finished by Oct. 4, 2020. Send a one-page pdf with your name and email, supervisor name and email, title, abstract, references of papers to me. <br />
* YOU obtain written consent from the ICFP staff (me), by Oct. 11, 2020, that this choice is OK.<br />
* YOU organize regular meetings or other exchange with your advisor and with me, if necessary.<br />
* YOU immediately start reading.<br />
* YOU send intermediate half-page report by 15 Nov 2020.<br />
* YOU send in the final report by 08 January 2021 midnight.<br />
* YOU will defend your project on 19 / 20 Jan 2021.<br />
<br />
==DO's and DON'Ts==<br />
<br />
* A library-based project is (usually) about reading a few original research papers. Usually, you will have to branch out to other papers, books, etc. Usually, you will discuss with your supervisor, other professors, senior students, etc.<br />
* Original research papers can have been written anytime in the 19th, 20th or 21st century.<br />
* Some projects ask you to read and understand papers from different periods, and using different approaches (theoretical, experimental, observational, computational...).<br />
* A library-based project is normally not about a chapter or so from a text book.<br />
* A library-based project needs you to connect with your supervisor and (normally) with the ICFP staff (me), on a regular basis.<br />
* A library-based project is all about overview/context and your ability to synthesize: Besides the papers there is what professors and researchers, and especially your supervisor, tell you about them.<br />
* A library-based project is difficult, yet very exciting.<br />
* A library-based project can be the source of much frustration, and of great satisfaction.<br />
* A library-based project, just as your other courses, may influence your career choices.<br />
* A library-based project is not a programming assignment (although you might write a little code to understand things).<br />
* A library-based project is certainly not an internship (stage). This means that, for example, you should not spend three months understanding a single equation, or a single page of an article, that is too difficult for you. Try to understand what the equation means: Ask your supervisor. Go beyond the equation! Understand the context!<br />
* A library-based project is not journalism: You need to understand the scientific questions quite in detail.<br />
* Plagiarism will not be tolerated, and you cannot do your library-based project with a member of your (extended) family.<br />
<br />
==Planning (provisional)==<br />
* Sep 1, 2020 - Presentation of the project by Kris van Houcke<br />
* Sep 3 - 13, 2020 - Message of intent (YOU to ME)<br />
* Sep 13 - October 5, 2020 - Mail exchange/Meetings with ICFP staff/ potential supervisors (YOU and ME)<br />
* Oct 4, 2020 - Project all set up (mail to ME, hard deadline)<br />
* Oct 11, 2020 - Acceptance/Rejection of project (ME to YOU) <br />
* Nov 16, 2020 - One-page intermediate report (YOU to ME)<br />
* Jan 8, 2021 (midnight) - Due date of report (YOU to ME) - hard deadline<br />
* Jan 20 / 21, 2021 - Oral defense ("soutenance") of the project (YOU to JURY)<br />
<br />
==Report / Defense (soutenance)==<br />
<br />
* The report is limited to 15 pages (including references), in correct English, and in scientific style. You should have it re-read by others.<br />
* Additionally, you will provide a signed anti-plagiarism declaration. This document will be provided in due time. By this you certify that the report is entirely written by yourself, and that you cited all your sources. <br />
* The report must contain an introduction, a development, and a conclusion/outlook.<br />
* The oral presentation lasts 15 min (in English), and it must be comprehensible to ICFP Prof (me) and to the outside expert.<br />
* There are 10 min of questions. Anything you have written or said is open to scrutiny.<br />
* Besides the content of the report and the presentation, the jury may ask questions of context.</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Qin_Hoellmer_Krauth_2020Qin Hoellmer Krauth 20202020-09-01T16:30:20Z<p>Summary: </p>
<hr />
<div>'''L. Qin, P. Hoellmer, W. Krauth''' '''''Direction-sweep Markov chains''''' '''J. Phys. A: Math. Theor. 55, 105003 (2022)'''<br />
<br />
<br />
'''Abstract''' We discuss a non-reversible, lifted Markov-chain Monte Carlo (MCMC) algorithm for particle systems in which the direction of proposed displacements is changed deterministically. This algorithm sweeps through directions analogously to the popular MCMC sweep methods for particle or spin indices. Direction-sweep MCMC can be applied to a wide range of reversible or non-reversible Markov chains, such as the Metropolis algorithm or the event-chain Monte Carlo algorithm. For a single two-dimensional tethered hard-disk dipole, we consider direction-sweep MCMC in the limit where restricted equilibrium is reached among the accessible configurations for a fixed direction before incrementing it. We show rigorously that direction-sweep MCMC leaves the stationary probability distribution unchanged and that it profoundly modifies the Markov-chain trajectory. In the limit of small direction increments, long excursions, with persistent rotation in one direction, alternate with long sequences of rapid zigzags resulting in persistent rotation in the opposite direction. The mapping to a Langevin equation then yields the exact scaling of excursions, while the zigzags are described through a non-linear differential equation that is solved exactly. We show that the direction-sweep algorithm can have shorter mixing times than algorithms with random updates of directions. We point out possible applications of direction-sweep MCMC in polymer physics and in molecular simulation.<br />
<br />
[http://arxiv.org/pdf/2007.15615 Electronic version (from arXiv)]<br />
<br />
[https://doi.org/10.1088/1751-8121/ac508a doi link (subscription required)]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Group_Webinar_May_7%2C_2020:_Botao_LiGroup Webinar May 7, 2020: Botao Li2020-05-20T22:55:28Z<p>Summary: </p>
<hr />
<div>'''Group Webinar Wednesday, 7 May 2020, 10h30'''<br />
<br />
'''Speaker:''' '''''Botao Li (Laboratoire de Physique, ENS)'''''<br />
<br />
'''Title:''' '''''ECMC with local time and its parallel implementation'''''<br />
<br />
'''Abstract:'''<br />
We have discovered that, in ECMC, time need not be global. We introduce the concept of local time, which allows us to abandon the central scheduler and to minimize the propagation of information. A parallel ECMC algorithm that implements the idea of local time has been designed and benchmarked on multithreaded machines. The algorithm is proven to be correct. On a CPU with 40 logical cores, the prototype of multithreaded ECMC is more than an order of magnitude faster than its single-threaded counterpart.<br />
<br />
'''Reference:''' https://arxiv.org/abs/2004.11040; see the [[Li_Todo_Maggs_Krauth_2020|paper in Computer Physics Communications]].<br />
'''Code:''' All code is open-source and available on GitHub (see the paper).<br />
<br />
'''Live Recording:''' Not available<br />
<br />
[[Group_WK#Group_Webinar|Return to Group Webinars]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Group_Webinar_May_20%2C_2020:_Philipp_H%C3%B6llmerGroup Webinar May 20, 2020: Philipp Höllmer2020-05-20T22:29:17Z<p>Summary: </p>
<hr />
<div>'''Group Webinar Wednesday, 20 May 2020, 10h30'''<br />
<br />
'''Speaker:''' '''''Philipp Höllmer (University of Bonn)'''''<br />
<br />
'''Title:''' '''''JeLLyFysh-Version1.1 — A General-Purpose Python Application for All-Atom Event-Chain Monte Carlo'''''<br />
<br />
'''Abstract:'''<br />
The open-source JeLLyFysh Python application implements the event-chain Monte Carlo algorithm (ECMC) for a wide range of all-atom systems. This talk introduces the application's architecture, and shows how it systematically formulates the entire time evolution of ECMC in terms of events. A number of worked-out simulation examples that use, in particular, the long-ranged Coulomb interaction and the cell-veto algorithm are presented. Finally, recent improvements of the application in its newest version 1.1 are highlighted for the first time. This version marks the next step on the application's way to a basis code that fosters the development of irreversible Markov-chain algorithms.<br />
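The event-driven formulation can be caricatured in a few lines of Python 3 (a sketch only; the event labels and rates below are invented for illustration and are not JeLLyFysh's API): at each step, the next event is simply the earliest among the candidate event times.<br />

```python
import random

random.seed(1)
t = 0.0
log = []
for _ in range(5):
    # candidate event times (labels and rates are illustrative)
    candidates = {'pair-interaction': random.expovariate(1.0),
                  'cell-veto': random.expovariate(0.5),
                  'end-of-chain': 1.0}
    label, dt = min(candidates.items(), key=lambda kv: kv[1])
    t += dt        # advance the simulation to the earliest candidate event
    log.append(label)
print(t, log)
```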
<br />
'''Reference:''' https://doi.org/10.1016/j.cpc.2020.107168<br />
<br />
'''Live Recording:''' https://youtu.be/Io13Mh-gv5w<br />
<br />
[[Group_WK#Group_Webinar|Return to Group Webinars]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Li_Todo_Maggs_Krauth_2020Li Todo Maggs Krauth 20202020-04-24T14:26:01Z<p>Summary: </p>
<hr />
<div>'''B. Li, S. Todo, A. C. Maggs, W. Krauth''' '''''Multithreaded event-chain Monte Carlo with local times''''' '''Computer Physics Communications 261 107702 (2021)'''<br />
<br />
<br />
'''Abstract''' <br />
We present a multithreaded event-chain Monte Carlo algorithm (ECMC) for hard spheres. Threads synchronize at infrequent breakpoints and otherwise scan for local horizon violations. Using a mapping onto absorbing Markov chains, we rigorously prove the correctness of a sequential-consistency implementation for small test suites. On x86 and ARM processors, a C++ (OpenMP) implementation that uses compare-and-swap primitives for data access achieves considerable speed-up with respect to single-threaded code. The generalized birthday problem suggests that for the number of threads scaling as the square root of the number of spheres, the horizon-violation probability remains small for a fixed simulation time. We provide C++ and Python open-source code that reproduces all our results. <br />
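The birthday-problem heuristic mentioned in the abstract can be sketched as follows (a back-of-the-envelope estimate, not the paper's derivation): with T threads working on N spheres, the probability of two threads touching the same neighborhood scales roughly as 1 - exp(-T^2/(2N)), which stays small while T is well below sqrt(N).<br />

```python
import math

def collision_probability(n_threads, n_spheres):
    # birthday-problem estimate: P ~ 1 - exp(-T^2 / (2 N))
    return 1.0 - math.exp(-n_threads ** 2 / (2.0 * n_spheres))

n_spheres = 10000
for n_threads in (10, 100, 1000):
    print(n_threads, collision_probability(n_threads, n_spheres))
```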
<br />
[http://arxiv.org/pdf/2004.11040 Electronic version (from arXiv)]<br />
<br />
[https://doi.org/10.1016/j.cpc.2020.107702 DOI: 10.1016/j.cpc.2020.107702]<br />
<br />
[https://www.sciencedirect.com/science/article/abs/pii/S0010465520303453 Journal version] (requires subscription)<br />
<br />
[https://github.com/jellyfysh/ParaSpheres ParaSpheres GitHub repository] from which the ParaSpheres programs described in the paper (in Python, Fortran, C++, and shell) may be [https://en.wikipedia.org/wiki/Fork_(software_development) forked], cloned, or simply copied.</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Wegner_2d_Exact.pyWegner 2d Exact.py2019-12-01T23:20:48Z<p>Summary: </p>
<hr />
<div> import math, cmath, random<br />
random.seed('Werner')<br />
<br />
L = 4<br />
pp = 2.0 * math.pi / L<br />
<br />
dim = 2<br />
N = L ** dim # take even, dim = 2 is assumed<br />
beta = math.sqrt(2.0)<br />
<br />
knumbers = [(k, l) for l in range(L) for k in range(L)]<br />
kminusnumbers = {}<br />
for (k, l) in knumbers:<br />
kminusnumbers[(k, l)] = ((L - k) % L, (L - l) % L)<br />
rvalues = knumbers[:]<br />
PhiR = {}<br />
neighbours = {}<br />
for rvec in rvalues:<br />
PhiR[rvec] = (random.uniform(-1.0, 1.0)) # this is an example<br />
neighbours[rvec] = [((rvec[0] + 1) % L, rvec[1]), (rvec[0], (rvec[1] + 1) % L),<br />
((rvec[0] - 1) % L, rvec[1]), (rvec[0], (rvec[1] - 1) % L)]<br />
epskvalues = {}<br />
for (kx, ky) in knumbers:<br />
epskvalues[(kx, ky)] = 4.0 * (math.sin(kx * pp / 2.0) ** 2 +<br />
math.sin(ky * pp / 2.0) ** 2)<br />
#<br />
# Energy in PhiR<br />
#<br />
Energy_real = - 2.0 * dim * N<br />
for rvec in rvalues:<br />
for rvecp in neighbours[rvec]:<br />
Energy_real += (PhiR[rvec] - PhiR[rvecp]) ** 2 / 2.0<br />
print Energy_real, 'Energy in PhiR'<br />
#<br />
# Fourier modes PhiK<br />
#<br />
PhiK = {}<br />
for kvec in knumbers:<br />
PhiK[kvec] = 0.0<br />
for rvec in rvalues:<br />
PhiK[kvec] += cmath.exp(- 1j * pp * (kvec[0] * rvec[0] + kvec[1] *<br />
rvec[1])) * PhiR[rvec] / math.sqrt(N)<br />
#<br />
# Energy in PhiK<br />
#<br />
Energy_fourier = - 2.0 * dim * N<br />
for kvec in knumbers:<br />
kvecp = kminusnumbers[kvec]<br />
Energy_fourier += epskvalues[kvec] * PhiK[kvec] * PhiK[kvecp]<br />
print Energy_fourier, ' Energy in PhiK'<br />
#<br />
# Definition of positive Brillouin zone, and of symmetric points<br />
#<br />
kdummy = knumbers[:]<br />
kdummy.remove((0, 0))<br />
bplus = []<br />
bsymm = []<br />
while kdummy != []:<br />
kvec = kdummy.pop()<br />
if kvec == kminusnumbers[kvec]:<br />
bsymm.append(kvec)<br />
else:<br />
kdummy.remove(kminusnumbers[kvec])<br />
bplus.append(kvec)<br />
print bplus, 'bplus'<br />
print bsymm, 'bsymm'<br />
PsiKpos = {}<br />
PsiKneg = {}<br />
#<br />
# Fourier modes PsiK (real valued)<br />
#<br />
for kvec in bplus:<br />
kvecp = kminusnumbers[kvec]<br />
PsiKpos[kvec] = ((PhiK[kvec] + PhiK[kvecp]) / math.sqrt(2.0)).real<br />
PsiKneg[kvec] = ((PhiK[kvec] - PhiK[kvecp]) / (math.sqrt(2.0) * 1j)).real<br />
PsiKsymm = {}<br />
for kvec in bsymm:<br />
PsiKsymm[kvec] = PhiK[kvec].real<br />
#<br />
# Energy in PsiK<br />
#<br />
Energy_PsiK = - 2.0 * dim * N<br />
for kvec in bplus:<br />
Energy_PsiK += (PsiKpos[kvec] ** 2 + PsiKneg[kvec] ** 2) * epskvalues[kvec]<br />
for kvec in bsymm:<br />
Energy_PsiK += PsiKsymm[kvec] ** 2 * epskvalues[kvec]<br />
print Energy_PsiK, ' Energy in PsiK'<br />
#<br />
# Correlation between site 0 and site k<br />
#<br />
print 'correlations using PhiR and PsiK'<br />
for rvec in rvalues:<br />
Corr_PsiK = 0.0<br />
for kvec in bplus:<br />
kr = (kvec[0] * rvec[0] + kvec[1] * rvec[1]) * pp<br />
Corr_PsiK += \<br />
(PsiKpos[kvec] * (1.0 - math.cos(kr)) + PsiKneg[kvec] * math.sin(kr)) \<br />
* math.sqrt(2.0 / N)<br />
for kvec in bsymm:<br />
kr = (kvec[0] * rvec[0] + kvec[1] * rvec[1]) * pp<br />
Corr_PsiK += \<br />
PsiKsymm[kvec] * (1.0 - math.cos(kr)) * math.sqrt(1.0 / N)<br />
<br />
print rvec, Corr_PsiK, PhiR[(0, 0)] - PhiR[rvec], ' r, Fourier term, direct term'<br />
#<br />
# Expectation value of correlation<br />
#<br />
xdata = []<br />
ydata = []<br />
for rvec in rvalues:<br />
summe = 0.0<br />
for kvec in bplus:<br />
kr = (kvec[0] * rvec[0] + kvec[1] * rvec[1]) * pp<br />
summe += math.sin(kr / 2) ** 2 / epskvalues[kvec]<br />
for kvec in bsymm:<br />
kr = (kvec[0] * rvec[0] + kvec[1] * rvec[1]) * pp<br />
summe += (1 - math.cos(kr)) ** 2 / (8.0 *epskvalues[kvec])<br />
print rvec, math.exp(- 2.0 * summe / N / beta)<br />
ydata.append( math.exp(- 2.0 * summe / N / beta))<br />
xdata.append(rvec)</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Wegner_1d_Exact.pyWegner 1d Exact.py2019-12-01T23:18:24Z<p>Summary: </p>
<hr />
<div> import math, cmath, random<br />
random.seed('Werner')<br />
<br />
N = 8 # take even<br />
beta = math.sqrt(2.0)<br />
<br />
PhiR = []<br />
for k in range(N):<br />
PhiR.append(random.uniform(-1.0, 1.0)) # this is an example<br />
<br />
rvalues = range(N)<br />
kvalues = [2.0 * math.pi / N * float(l) for l in range(N)]<br />
epskvalues = [4.0 * math.sin( k / 2.0) **2 for k in kvalues]<br />
<br />
Energy_real = -2.0 * N + (PhiR[N - 1] - PhiR[0]) ** 2<br />
for r in range(N - 1):<br />
Energy_real += (PhiR[r + 1] - PhiR[r]) ** 2<br />
<br />
print Energy_real, 'Energy of the conf. in real representation'<br />
#<br />
# Fourier transform (by hand, no FFT)<br />
#<br />
PhiK = []<br />
for fourierk in kvalues:<br />
dummy = 0.0<br />
for i in range(N):<br />
term = cmath.exp(- 1j * fourierk * rvalues[i]) * PhiR[i] / math.sqrt(N)<br />
dummy += term<br />
PhiK.append(dummy)<br />
#<br />
# real-valued Fourier coefficients, don't forget to count PsiK[N/2] = PhiK[N/2]<br />
#<br />
PsiKpos = [0.0]<br />
PsiKneg = [0.0]<br />
for k in range(1,N / 2):<br />
PsiKpos.append((PhiK[k] + PhiK[N - k]) / math.sqrt(2.0))<br />
PsiKneg.append((PhiK[k] - PhiK[N - k]) / (math.sqrt(2.0) * 1j))<br />
#<br />
# Express the energy in Fourier modes<br />
#<br />
Energy_fourier = - 2.0 * N + 4.0 * PhiK[N / 2] ** 2<br />
for k in range(1, N / 2):<br />
Energy_fourier += 2.0 * epskvalues[k] * PhiK[k] * PhiK[N - k]<br />
<br />
print Energy_fourier, 'energy of the conf. in Fourier representation'<br />
#<br />
# Energy in Fourier representation with real-valued terms<br />
#<br />
Energy_real = - 2.0 * N + 4.0 * PhiK[N / 2]**2<br />
for k in range(1, N / 2):<br />
Energy_real += (PsiKneg[k] ** 2 + PsiKpos[k] ** 2) * epskvalues[k]<br />
print Energy_real, ' energy of th conf. with real-valued Fourier coefficients'<br />
#<br />
# Correlation between site 0 and site k<br />
#<br />
print 'check correlation in real and (real Fourier)'<br />
for r in range(N):<br />
dummy = PhiK[N / 2] * (1.0 - (-1) ** r) / math.sqrt(N)<br />
for k in range(1, N / 2):<br />
term = PsiKpos[k] * (1.0 - math.cos(kvalues[k] * r)) + \<br />
PsiKneg[k] * math.sin(kvalues[k] * r)<br />
dummy += term * math.sqrt(2.0 / N)<br />
print r, dummy.real, PhiR[0] - PhiR[r], ' r, Fourier term, direct term'<br />
<br />
xdata = []<br />
ydata = []<br />
for r in range(0, N / 2 + 1):<br />
summe = (1.0 - (-1) ** r) ** 2 / (32.0)<br />
for k in range(1, N / 2):<br />
summe += math.sin(math.pi * k * r / N) ** 2 / (4.0 * math.sin(math.pi *<br />
k / N) ** 2)<br />
ydata.append( math.exp(- 2.0 * summe / N / beta))<br />
xdata.append(r)<br />
print ydata</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Wegner_1d_Direct.pyWegner 1d Direct.py2019-12-01T23:16:10Z<p>Summary: </p>
<hr />
<div> import math, cmath, random<br />
<br />
N = 8 # take even<br />
beta = math.sqrt(2.0)<br />
NIter = 500000000<br />
print NIter, 'NIter'<br />
<br />
rvalues = range(N)<br />
kvalues = [2.0 * math.pi / N * float(l) for l in range(N)]<br />
epskvalues = [4.0 * math.sin( k / 2.0) **2 for k in kvalues]<br />
correlation = [0.0] * N<br />
for iter in range(NIter):<br />
#<br />
# real-valued Fourier coefficients, don't forget to count PsiK[N/2] = PhiK[N/2]<br />
#<br />
PsiKpos = [0.0]<br />
PsiKneg = [0.0]<br />
for k in range(1, N / 2):<br />
sigma = 1.0 / math.sqrt(2.0 * beta * epskvalues[k])<br />
PsiKpos.append(random.gauss(0.0, sigma))<br />
PsiKneg.append(random.gauss(0.0, sigma))<br />
PhiK = [0] * N<br />
PhiK[0] = 0.0<br />
<br />
sigma = 1.0 / math.sqrt(8.0 * beta)<br />
PhiK[N / 2] = random.gauss(0.0, sigma)<br />
for k in range(1, N / 2):<br />
PhiK[k] = (PsiKpos[k] + 1j * PsiKneg[k]) / math.sqrt(2.0)<br />
PhiK[N - k] = (PsiKpos[k] - 1j * PsiKneg[k]) / math.sqrt(2.0)<br />
<br />
PhiR = []<br />
for r in rvalues:<br />
dummy = 0.0<br />
for k in range(N):<br />
term = cmath.exp(+ 1j * r * kvalues[k]) * PhiK[k] / math.sqrt(N)<br />
dummy += term<br />
PhiR.append(dummy.real)<br />
#<br />
# Compute correlation between site 0 and site k<br />
#<br />
for i in range(N):<br />
correlation[i] += math.cos(PhiR[0] - PhiR[i])<br />
for i in range(N):<br />
print i, correlation[i] / NIter, 'correlation with i'</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Chebychev_Hoeffding_Interval_Lecture3_ICFP_2019.pyChebychev Hoeffding Interval Lecture3 ICFP 2019.py2019-09-24T09:49:21Z<p>Summary: </p>
<hr />
<div>This is a simple Python2 program to check with which probability the confidence interval computed from the Hoeffding or Chebychev inequality actually contains the parameter theta of the Bernoulli distribution.<br />
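For reference, the Hoeffding half-width used in the program, eps = sqrt(-ln(p/2)/(2n)), can be computed on its own (a Python 3 sketch; the program below is Python 2):<br />

```python
import math

def hoeffding_halfwidth(n, p):
    # two-sided Hoeffding bound: P(|theta_hat - theta| >= eps) <= 2 exp(-2 n eps^2) = p
    return math.sqrt(-math.log(p / 2.0) / (2.0 * n))

eps = hoeffding_halfwidth(4000, 0.05)
print(eps)   # about 0.0215 for n = 4000, p = 0.05
```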
<br />
import math, random, pylab<br />
##<br />
## theta: parameter of the Bernoulli distribution<br />
## n: number of samples<br />
## p: p value, confidence interval<br />
##<br />
n = 4000<br />
N_test = 10000<br />
p = 0.05<br />
n_bins = 100<br />
x_values = []<br />
y_values_c = []<br />
y_values_h = []<br />
for tt in range(1, n_bins):<br />
theta = tt / float(n_bins)<br />
x_values.append(theta)<br />
Cover_h = 0.0<br />
Cover_c = 0.0<br />
for iter in range(N_test):<br />
theta_hat = 0<br />
for i in range(n):<br />
Upsilon = random.random()<br />
if Upsilon < theta: theta_hat += 1.0 / n<br />
q_hat = 1 - theta_hat<br />
if abs(theta_hat - theta) < math.sqrt(-math.log(p / 2.0)/ 2.0 / n):<br />
Cover_h += 1.0 / N_test<br />
if abs(theta_hat - theta) < math.sqrt(1.0 / n / 2.0 / math.sqrt(p)):<br />
Cover_c += 1.0 / N_test<br />
y_values_c.append(Cover_c)<br />
y_values_h.append(Cover_h)<br />
print theta, p, Cover_h, Cover_c, 'theta, p, cover'<br />
pylab.title('Binomial rv: Hoeffding & Chebychev intervals, $n=$' + str(n))<br />
pylab.ylabel('Coverage probability $95$ \% standard interval')<br />
pylab.xlabel('Bernoulli probability $ \\theta $')<br />
pylab.plot(x_values, y_values_c, label = 'Chebychev')<br />
pylab.plot(x_values, y_values_h, label = 'Hoeffding')<br />
pylab.legend(loc='lower right')<br />
pylab.savefig('HoeffdingChebychevInterval_p.png')<br />
pylab.show()</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/Wald_Interval_Lecture3_ICFP_2019.pyWald Interval Lecture3 ICFP 2019.py2019-09-24T09:42:52Z<p>Summary: </p>
<hr />
<div>Here is a simple Python2 program that checks the validity of the Wald interval for p=0.05 (corresponding to 1.96 sigma of the Gaussian distribution).<br />
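The 95% Wald (standard) interval tested by the program is theta_hat +/- 1.96 sqrt(theta_hat (1 - theta_hat) / n); as a self-contained Python 3 sketch:<br />

```python
import math

def wald_interval(theta_hat, n, z=1.96):
    # standard (Wald) interval for a binomial proportion
    half = z * math.sqrt(theta_hat * (1.0 - theta_hat) / n)
    return theta_hat - half, theta_hat + half

lo, hi = wald_interval(0.5, 100)
print(lo, hi)   # (0.402, 0.598) for theta_hat = 0.5, n = 100
```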
<br />
import math, random, pylab<br />
#<br />
# n: number of terms in the sum of Bernoulli variables <br />
# N_test: number of realizations of the sum of Bernoulli trials<br />
# theta: parameter of Bernoulli distribution<br />
# n_bins: number of bins for theta<br />
# Here we use the value 1.96, appropriate for p=0.05<br />
#<br />
n = 100<br />
n_bins = 1000<br />
N_test = 10000<br />
x_values = []<br />
y_values = []<br />
for tt in range(2, n_bins - 1):<br />
theta = tt / float(n_bins)<br />
x_values.append(theta)<br />
Cover = 0.0<br />
for iter in range(N_test):<br />
theta_hat = 0.0<br />
for i in range(n):<br />
Upsilon = random.random()<br />
if Upsilon < theta: theta_hat += 1.0 / n<br />
if abs(theta_hat - theta) < 1.96 * math.sqrt(theta_hat * max(1 -<br />
theta_hat, 0.0) / n): <br />
Cover += 1.0 / N_test<br />
y_values.append(Cover)<br />
print theta, Cover, 'theta, Cover'<br />
pylab.title('Binomial proportion "standard 0.05 interval" $n = 100$')<br />
pylab.plot(x_values, y_values)<br />
pylab.xlabel('Bernoulli probability $ \\theta $')<br />
pylab.ylabel('Coverage probability $95$ \% standard interval')<br />
pylab.ylim(0.8,1.0)<br />
pylab.savefig('StandardInterval_test.png')<br />
pylab.show()</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/ICFP_Stat_Physics_2019_infosICFP Stat Physics 2019 infos2019-08-31T22:18:35Z<p>Summary: /* Planning */</p>
<hr />
<div>[[ICFP_Stat_Physics_2019| Return to ICFP Statistical Physics main page]]<br />
<br />
'''Further infos for the ICFP course'''<br />
<br />
==Title of the course== <br />
Statistical Physics: Concepts and applications<br />
<br />
==Scope of the course==<br />
This lecture course on statistical mechanics will take students from the foundations of probability theory and statistical inference to the important models and the central concepts and techniques of statistical mechanics. The main focus will be on equilibrium and on classical systems, but we will also treat transport and dissipation, and discuss quantum statistical mechanics for Boson systems and quantum spin models.<br />
==Organization, grading==<br />
There will be 15 lectures and 14 tutorial sessions, 9 homeworks (with solutions, not graded), one written intermediate exam (30%), and one written final exam (70%).<br />
<br />
==Planning==<br />
'''Lectures (CM) and tutorials (TD)''': Each Monday morning, from 09 September 2019 through 20 January 2020 (Lectures: 8:30 - 9:25 am, 9:35 - 10:30 am; tutorials: 10:45 - 11:40 am, 11:50 am - 12:45 pm).<br />
'''Attention: No CM/TD on 28 October 2019, 11 November 2019, 23 December 2019, 30 December 2019'''<br />
<br />
'''Intermediate exam''': Monday, 18 November 2019 10:30 - 12:30 in room L357/359, third floor of 24 rue Lhomond. <br />
'''Final exam''': Monday, 20 January 2020 09:00 - 12:00 in room Conf IV, second floor of 24 rue Lhomond (orange corridor).<br />
<br />
<!--For your information, you can [http://www.lps.ens.fr/%7Ekrauth/images/a/a3/Exam_icfp_2015_16.pdf study here] the exam I gave to the 2015/16 class. Subjects of this exam were: Basic statistics: MLE (Gaussians, German tank problem), Correlation lengths, Bose-Einstein condensation, and Jamming. See for yourself! --><br />
<br />
==Localization==<br />
The course will take place at the Physics Department of Ecole normale supérieure, 24 rue Lhomond, 75005 Paris.<br />
'''Lectures''': room '''L357/359''', third floor<br />
'''Tutorials''': room '''L357/359''', third floor<br />
<br />
==Prerequisites==<br />
This course will be self-contained. Some prior exposure to elementary statistical or thermal physics on the undergraduate level may be helpful.<br />
<br />
==Computing requirements==<br />
<br />
Probability, statistics, and statistical physics are today closely linked to computing. Students should be able to handle elementary Python programs. A number of such programs will be provided, and some will have to be modified for the homework sessions.</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/ICFP_Stat_Physics_2019ICFP Stat Physics 20192019-08-31T22:12:22Z<p>Summary: </p>
<hr />
<div>This is the homepage for the ICFP course: Statistical Physics: Concepts and Applications that is running from 09 September 2019 through 13 January 2020. Lectures start at 8:30 on Monday mornings, and tutorials at 10:45.<br />
<br />
Lectures: Werner KRAUTH<br />
<br />
Tutorials (TD): Victor DAGARD, Valentina ROS<br />
<br />
Homeworks, factsheets: Botao LI<br />
<br />
[[ICFP Stat Physics 2019 infos| Look here for practical information]]<br />
<br />
<br />
Latest News:<br />
* 15 January 2020 Previous final exams available ([http://www.lps.ens.fr/%7Ekrauth/images/e/eb/ICFP_2015_Final.pdf 2015 Final];[http://www.lps.ens.fr/%7Ekrauth/images/3/31/ICFP_2016_Final.pdf 2016 Final]; [http://www.lps.ens.fr/%7Ekrauth/images/6/6b/ICFP_2017_Final.pdf 2017 Final]; [http://www.lps.ens.fr/%7Ekrauth/images/e/e9/ICFP_2018_Final.pdf 2018 Final]).<br />
* 15 January 2020: Fact sheet 08 (mean-field theory), Solutions of HW10, HW13 available<br />
* 02 January 2020: Solutions of TD08, TD10, TD11, and of HW08 available.<br />
* 30 December 2019: Here are the lecture notes, which I am still working on: [http://www.lps.ens.fr/%7Ekrauth/images/7/7a/ICFP_StatMech1_75.pdf pages 1-75], [http://www.lps.ens.fr/%7Ekrauth/images/b/b2/ICFP_StatMech76_125.pdf pages 76-125], [http://www.lps.ens.fr/%7Ekrauth/images/4/48/ICFP_StatMech126_end.pdf pages 126-end].<br />
* THERE IS NO LECTURE ON 28 October 2019!! Next Lecture on 4 November 2019; [[ICFP_Stat_Physics_2019#Week_9_.2818_November_2019.29:_Landau_theory_.2F_Ginzburg_criterium_.28Transitions_and_order_parameters_2.2F2.29_.2F_Midterm_exam| Mid-term exam on 18 November 2019.]]<br />
* 24 October 2019: Previous Midterm exams uploaded: [http://www.lps.ens.fr/%7Ekrauth/images/6/61/MidTerm_2017_Solutions.pdf 2017] [http://www.lps.ens.fr/%7Ekrauth/images/1/15/MidTerm_2018.pdf 2018]<br />
* 24 October 2019: Solution of Homework 06 available, Python program for Lecture 07 (Duality)<br />
* 22 October 2019: Solution of Homework 05 available<br />
* 07 October 2019: [http://www.lps.ens.fr/%7Ekrauth/images/c/c8/Lecture05.pdf Lecture 5 (preliminary, without HW and TD)]<br />
* 30 September 2019: [http://www.lps.ens.fr/%7Ekrauth/images/1/16/Lectures01_02.pdf Lectures 1-3 (preliminary version)]<br />
* 29 September 2019: Factsheet 2a (Bootstrap method), Factsheet 2b (DKW inequality) available, solution HW01 available.<br />
* 09 September 2019: TD01 (with solution) and HW01 uploaded<br />
* 31 August 2019: Website set up<br />
<br />
==Week 1 (09 September 2019): Probability theory==<br />
<br />
* [http://www.lps.ens.fr/~krauth/images/5/5c/TD01_ICFP_2019_sol.pdf Tutorial 01: Characteristic functions / Stable distributions (with solutions)].<br />
* [http://www.lps.ens.fr/~krauth/images/b/b9/HW01_ICFP_2019.pdf Homework 01: Chebychev inequality / Rényi formula / Lévy distribution (with solutions)]. <br />
* [[renyi_problem_HW01_ICFP_2018.py| Python program: Renyi.py (for Homework 01)]].<br />
* [[levy_problem_HW01_ICFP_2018.py| Python program: Levy.py (for Homework 01)]].<br />
<br />
References for Week 1: <br />
* L. Wasserman, "All of Statistics, A Concise Course in Statistical Inference" (Springer, 2005) part 1.<br />
* W. Krauth, "[http://www.lps.ens.fr/~krauth/images/7/73/SMAC_1_3_4.pdf Statistical Mechanics: Algorithms and Computations]" (Oxford, 2006) Section 1.3.4 only - Error intervals from Chebychev inequality.<br />
<br />
==Week 2 (16 September 2019): Statistical inference==<br />
<br />
* [http://www.lps.ens.fr/~krauth/images/a/a8/TD02_ICFP_2019_sol.pdf Tutorial 02: Inference - from maximum likelihood to Bootstrap (with solutions)].<br />
* [http://www.lps.ens.fr/~krauth/images/a/a1/HW02_ICFP_2019.pdf Homework 02: Estimation, from maximum likelihood to Bayes (with solutions)].<br />
* [http://www.lps.ens.fr/~krauth/images/1/1a/BootstrapConfidenceInterval.pdf Fact sheet 2a: Bootstrap confidence interval].<br />
* [http://www.lps.ens.fr/~krauth/images/e/ed/FactSheet_DKW.pdf Fact sheet 2b: The Dvoretzky–Kieffer–Wolfowitz inequality or the incredible power of non-parametric statistics].<br />
* [[bayes_tank_problem_HW02_ICFP_2019.py| Python program: Bayes_tank.py (for Homework 02)]].<br />
<br />
References for Week 2: <br />
* L. Wasserman, "All of Statistics, A Concise Course in Statistical Inference" (Springer, 2005) part 2<br />
* W. Krauth, "[http://www.lps.ens.fr/~krauth/images/7/73/SMAC_1_3_4.pdf Statistical Mechanics: Algorithms and Computations]" (Oxford, 2006) Section 1.3.4 only<br />
* B. Efron, "Maximum likelihood and decision theory" Ann. Statist. 10, 340, 1982.<br />
* B. Efron, "[https://projecteuclid.org/euclid.aos/1176344552 Bootstrap methods: another look at the jackknife]" The Annals of Statistics, 1-26, 1979.<br />
* P. Diaconis and B. Efron, "[http://web.cecs.pdx.edu/~cgshirl/Documents/Demonstrations/1983%20Diaconis%20Efron%20Computer-Intensive%20Methods%20in%20Statistics%20Sci%20Am%20May%201983.pdf Computer intensive methods in statistics]" Scientific American 248, no. 5, pp. 116-130, 1983.<br />
<br />
==Week 3 (23 September 2019): Statistical mechanics and Thermodynamics==<br />
<br />
* [http://www.lps.ens.fr/~krauth/images/1/15/TD03_ICFP_2019.pdf Tutorial 03: Basic statistical mechanics - spin ice, pressure, etc (with solutions)].<br />
* [[Wald_Interval_Lecture3_ICFP_2019.py| Python program: Wald_Interval_Lecture3_ICFP_2019.py (for Lecture 03, see also Brown et al. (2001))]].<br />
* [[Chebychev_Hoeffding_Interval_Lecture3_ICFP_2019.py| Python program: Chebychev_Hoeffding_Interval_Lecture3_ICFP_2019.py (for Lecture 03, see also Brown et al. (2001))]].<br />
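The two interval constructions compared in these programs and in Brown et al. (2001) can be sketched as follows. This is a hedged illustration of the standard Wald and Hoeffding formulas, not a copy of the course programs.<br />

```python
import math

def wald_interval(k, n, z=1.96):
    """Wald 95% interval for a binomial proportion (k successes in n trials):
    p_hat +/- z * sqrt(p_hat (1 - p_hat) / n). Approximate, can misbehave."""
    p = k / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return p - half, p + half

def hoeffding_interval(k, n, alpha=0.05):
    """Hoeffding interval: p_hat +/- sqrt(log(2/alpha) / (2 n)).
    Rigorous for all n, but wider than the Wald interval."""
    p = k / n
    half = math.sqrt(math.log(2.0 / alpha) / (2.0 * n))
    return p - half, p + half

k, n = 55, 100
print(wald_interval(k, n))       # narrow, approximate
print(hoeffding_interval(k, n))  # wide, rigorous
```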
<br />
References for Week 3:<br />
* W. Krauth SMAC, pp 55-59<br />
* L. D. Brown, T. Tony Cai, A. DasGupta "Interval Estimation for a Binomial Proportion" Statistical Science 16, 101–133 (2001). This highly cited paper started a big discussion on the use and misuse of error bars.<br />
* L. Wasserman "All of statistics", Section 6.3.2, p. 92 (on confidence sets).<br />
* D. A. Levin, Y. Peres "Markov Chains and Mixing Times, second edition" (discussion of the ergodic theorem)<br />
* Landau Lifshitz V, chapters 3,4,5 <br />
* K. Huang, "Statistical Mechanics 2nd edition" (1987) (Tutorial Problem 1).<br />
* L. Pauling, J. Am. Chem. Soc. 57, 2680-2684 (1935). (Tutorial Problem 2 on residual entropy of ice).<br />
* Bramwell, Gingras, Science 294, 1495 (2001) (Spin ice in pyrochlore).<br />
<br />
==Week 4 (30 September 2019): Phases and phase transitions: Van der Waals theory==<br />
<br />
* [http://www.lps.ens.fr/~krauth/images/6/66/TD04_ICFP_2019.pdf Tutorial 04: Clapeyron's equation, and first-order transitions in liquid crystals (with solutions)].<br />
* [http://www.lps.ens.fr/~krauth/images/3/3f/HW04_ICFP_2019.pdf Homework 04: Van-der-Waals theory and beyond (with solutions)].<br />
* [[Van_der_Waals.py| Python program: Van_der_Waals.py (for Homework 04)]].<br />
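The Van der Waals critical point used in Homework 04 (v_c = 3b, T_c = 8a/(27b), P_c = a/(27b^2), with k_B = 1) can be checked numerically: both dP/dv and d^2P/dv^2 vanish there. This sketch is an illustration, not the course's Van_der_Waals.py.<br />

```python
def vdw_pressure(v, t, a=1.0, b=1.0):
    """Van der Waals equation of state: P = T / (v - b) - a / v**2 (k_B = 1)."""
    return t / (v - b) - a / v**2

a, b = 1.0, 1.0
v_c, t_c = 3.0 * b, 8.0 * a / (27.0 * b)
# At the critical point, the first and second v-derivatives of P vanish:
eps = 1e-5
dP = (vdw_pressure(v_c + eps, t_c) - vdw_pressure(v_c - eps, t_c)) / (2.0 * eps)
d2P = (vdw_pressure(v_c + eps, t_c) - 2.0 * vdw_pressure(v_c, t_c)
       + vdw_pressure(v_c - eps, t_c)) / eps ** 2
print(dP, d2P)  # both consistent with zero up to finite-difference noise
```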
<br />
References for Week 4:<br />
* L. D. Landau, E. M. Lifshitz V, "Statistical Physics" (Pergamon, 1959, and later editions). NB: Chapter numbers and titles vary with edition. The following chapters all refer to the Lecture:<br />
** Chap 73 "Conditions of phase equilibrium"<br />
** Chap 79 "The critical point" (note that LL do not use the term "spinodal" for the points where dP/dV vanishes)<br />
** Chap 71 "Deviations of gases from the ideal state"<br />
** Chap 73 "Van der Waals' equation"<br />
** Chap 82 "The law of corresponding states"<br />
** Chap 152 (in some editions only) "Van der Waals theory of the critical point"<br />
** Chap 21 "Thermodynamic inequalities" (dP/dV < 0 is not strictly valid (!!) in finite systems - see homework)<br />
* R. A. Sauerwein, M. J. De Oliveira "Lattice model for biaxial and uniaxial nematic liquid crystals" J. of Chem. Phys. 144, 194904 (2016, Tutorial)<br />
* J. E. Mayer, W. W. Wood, "Interfacial Tension Effects in Finite, Periodic, Two-Dimensional Systems", Journal of Chemical Physics, 42, 4268 (1965, for the homework)<br />
<br />
==Week 5 (07 October 2019): Hard spheres and the Ising model in one dimension (Transfer matrix 1/2)==<br />
<br />
* [http://www.lps.ens.fr/%7Ekrauth/images/3/32/TD05_ICFP_2019.pdf Tutorial 05: One-dimensional classical models with phase transitions (with solutions)].<br />
* [http://www.lps.ens.fr/%7Ekrauth/images/0/0b/HW05_ICFP_2019.pdf Homework 05: Transfer matrices (with solutions)].<br />
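The transfer-matrix result for the Ising chain at zero field (largest eigenvalue lambda_max = 2 cosh(beta J)) gives the free energy and energy per spin in closed form. The function names below are illustrative; this is a sketch, not one of the course programs.<br />

```python
import math

def ising_1d_free_energy(beta, J=1.0):
    """Free energy per spin of the 1D Ising chain at h = 0, from the largest
    transfer-matrix eigenvalue lambda_max = 2 cosh(beta J)."""
    return -math.log(2.0 * math.cosh(beta * J)) / beta

def ising_1d_energy(beta, J=1.0):
    """Energy per spin e = -J tanh(beta J), the beta-derivative of beta * f."""
    return -J * math.tanh(beta * J)

for beta in [0.5, 1.0, 2.0]:
    print(beta, ising_1d_free_energy(beta), ising_1d_energy(beta))
```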
<br />
References for Week 5:<br />
* W. Krauth, "Statistical Mechanics: Algorithms and Computations" (Oxford, 2006) p. 269ff (hard-sphere partition function using the double substitution). <br />
* M Plischke, B Bergersen, "Equilibrium Statistical Physics" (World Scientific) p. 145f (some background material on the virial expansion), p. 77 ff (Ising chain, although our treatment was considerably different).<br />
* R. H. Swendsen, "Statistical mechanics of colloids and Boltzmann's definition of the entropy" American Journal of Physics 74, 187 (2006) (a good discussion of the Gibbs paradox)<br />
* D. J. Thouless, "Long-range order in one-dimensional Ising systems" Physical Review 187, 732 (1969) (Ingenious discussion of the 1/r^2 Ising model)<br />
* J. M. Kosterlitz, "Kosterlitz-Thouless physics: a review of key issues" Rep. Prog. Phys. 79 026001 (2016) (first two pages only, discussion and historical context for the Thouless paper. This is elementary to follow.).<br />
* C. Kittel, American Journal of Physics 37, 917 (1969) (First exercise of Tutorial 5)<br />
* J. A. Cuesta and A. Sanchez, J. Stat. Phys. 115, 869 (2004) (Third exercise of Tutorial 5, generalized Kittel model)<br />
<br />
==Week 6 (14 October 2019): Two-dimensional Ising model: From Ising to Onsager (Transfer matrix 2/2)==<br />
<br />
* [http://www.lps.ens.fr/%7Ekrauth/images/8/8d/TD06_ICFP_2019.pdf Tutorial 06: Peierls argument for spontaneous symmetry breaking in two and higher dimensions (with solutions)].<br />
* [http://www.lps.ens.fr/%7Ekrauth/images/1/19/HW06_ICFP_2019.pdf Homework 06: Thouless (!) argument; transfer matrix for the two-dimensional Ising model on a strip (with solutions)].<br />
* [http://www.lps.ens.fr/~krauth/images/2/2f/Transfer_2d_Ising.pdf Mathematica program: Transfer_2d_Ising.pdf (for Lecture 06 and Homework 06)].<br />
<br />
References for Week 6:<br />
* R. Peierls, Proceedings of the Cambridge Philosophical Society, 32, 477 (1936) (famous loop-counting argument establishing spontaneous symmetry breaking in the two-dimensional Ising model below a '''finite''' temperature)<br />
* C. Bonati, Eur. J. Phys. 35, 035002 (2014) (generalization of the Peierls argument to higher dimensions)<br />
* M Plischke, B Bergersen, "Equilibrium Statistical Physics" (World Scientific) section 6.1 (Transfer matrix for the two-dimensional Ising model, Onsager's solution)<br />
* T D Schultz, D C Mattis, E Lieb, "Two-dimensional Ising model as a soluble problem of many fermions" Reviews of Modern Physics 36, 856 (1964) (Authoritative account of Onsager's solution).<br />
<br />
==Week 7 (21 October 2019): Two-dimensional Ising model: From Kramers & Wannier to Kac & Ward (Low- and high-temperature expansions)==<br />
<br />
* [http://www.lps.ens.fr/%7Ekrauth/images/f/f8/TD07_ICFP_2019.pdf Tutorial 07: Thermodynamic quantities and high-temperature expansions (with solutions)].<br />
* [http://www.lps.ens.fr/%7Ekrauth/images/4/4b/HW07_ICFP_2019.pdf Homework 07: Graphical solution for the two-dimensional Ising model (with solutions)].<br />
* [[Ising dual 4x4.py| Python program: Ising dual 4x4.py (for Lecture 07)]].<br />
* [[Combinatorial ising.py| Python program: Combinatorial ising.py (for Lecture 07)]].<br />
* [http://www.lps.ens.fr/~krauth/images/e/ee/U_2x2.pdf Mathematica program: U_2x2.pdf (for Homework 07)]. <br />
* [http://www.lps.ens.fr/%7Ekrauth/images/2/2a/U_4x4.pdf Mathematica program: U_4x4.pdf (for Homework 07)].<br />
<br />
References for Week 7:<br />
* W. Krauth, [http://www.lps.ens.fr/%7Ekrauth/images/0/01/Kramers_Wannier.pdf "Statistical Mechanics: Algorithms and Computations" (Oxford, 2006) section 5.1.3 (high-temperature expansion, following van der Waerden (1941)), and section 5.1.4 (Kac-Ward solution)).]<br />
* R. P. Feynman "Statistical Mechanics: A set of Lectures" (Benjamin/Cummings, 1972) (thorough discussion of Kramers-Wannier duality which yields the value of T_c, some discussion of the Kac-Ward solution).<br />
* M. Kac, J. C. Ward, "A combinatorial solution of the two-dimensional Ising model" Physical Review 88, 1332 (1952) (NB: The paper contains the explicit diagonalization of the matrix U).<br />
* J. M. Yeomans, "Statistical Mechanics of Phase Transitions" (Oxford, 1992), chapter 6 (for exercise 1 of tutorial 07).<br />
<br />
==Week 8 (04 November 2019): The three pillars of mean-field theory (Transitions and order parameters 1/2)==<br />
<br />
* [http://www.lps.ens.fr/%7Ekrauth/images/5/58/TD08_ICFP_2019.pdf Tutorial 08: Physics in infinite dimensions---Ising model on a Bethe lattice (with solutions)].<br />
* [http://www.lps.ens.fr/%7Ekrauth/images/5/50/HW08_ICFP_2019.pdf Homework 08: Mean-field theory as easy as 1-2-3 (with solutions)]. <br />
* [[Mean_field_self_consistency_single_site.py| Python program: Mean_field_self_consistency_single_site.py (for Homework 08)]].<br />
* [[Mean field gen d Ising lattice.py| Python program: Mean field gen d Ising lattice.py (for Homework 08)]].<br />
* [[Ising mean field 1d.py| Python program: Ising mean field 1d.py (for Homework 08)]].<br />
* [http://www.lps.ens.fr/%7Ekrauth/images/0/09/Mean_field.pdf Fact sheet 08: Mean field theory and saddle points]<br />
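The Weiss self-consistency equation m = tanh(beta z J m) behind Homework 08 can be solved by fixed-point iteration. This minimal sketch is an illustration, not the course's Mean_field_self_consistency_single_site.py.<br />

```python
import math

def mean_field_m(beta, z=4, J=1.0, n_iter=1000):
    """Iterate the Weiss self-consistency equation m = tanh(beta * z * J * m),
    starting from the fully ordered state m = 1."""
    m = 1.0
    for _ in range(n_iter):
        m = math.tanh(beta * z * J * m)
    return m

# Mean-field critical point: beta_c = 1 / (z J) = 0.25 for z = 4, J = 1.
for beta in [0.2, 0.25, 0.3, 0.5]:
    print(beta, mean_field_m(beta))
```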
<br />
References for Week 8:<br />
* R. J. Baxter: "Exactly solved models in Statistical Mechanics" (1982) (Chapter 3, for the solution of the Ising model on a fully connected graph)<br />
* M. Plischke, B. Bergersen, "Equilibrium Statistical Physics" (World Scientific) section 3.1, pp 63 - 65 (Self-consistency à la Weiss, development for small m)<br />
* M. Plischke, B. Bergersen, "Equilibrium Statistical Physics" (World Scientific) section 3.1, pp 67 - 68 (Bragg-Williams theory)<br />
* C. M. Bender, S. A. Orszag, "Advanced Mathematical Methods for Scientists and Engineers: Asymptotic Methods and Perturbation Theory" (Springer, 1999) (difference equations)<br />
<br />
==Week 9 (18 November 2019): Landau theory / Ginzburg criterion (Transitions and order parameters 2/2) / Midterm exam==<br />
<br />
References for Week 9:<br />
* R. J. Baxter: "Exactly solved models in Statistical Mechanics" (1982) (Chapter 3: We expanded the free energy of the Ising model on a fully connected graph to motivate Landau theory)<br />
* J. Als‐Nielsen and R. J. Birgeneau: "Mean field theory, the Ginzburg criterion, and marginal dimensionality of phase transitions" Am. Journal of Physics 45, 554 (1977) (Elementary discussion of the Ginzburg criterion, although in the lecture we avoided the Fourier transform)<br />
* L. D. Landau, E. M. Lifshitz, "Statistical Physics", Chap 147 (Ginzburg criterion).<br />
<br />
==Week 10 (25 November 2019): Kosterlitz-Thouless physics in two dimensions: The XY model (Transitions without order parameters 1/2)==<br />
<br />
* [http://www.lps.ens.fr/%7Ekrauth/images/1/17/TD10_ICFP_2019.pdf Tutorial 10: The roughening transition (with solutions)].<br />
* [http://www.lps.ens.fr/~krauth/images/6/61/HW10_ICFP_2019.pdf Homework 10: Topological excitation and their interactions in the XY model (with solutions)].<br />
* [http://www.lps.ens.fr/%7Ekrauth/images/c/cb/Wegner_XY_finite.pdf Factsheet 10: Wegner's solution of the d-dimensional harmonic model].<br />
* [[Wegner 1d Exact.py| Python program: Wegner 1d Exact.py (for Factsheet 10)]].<br />
* [[Wegner 1d Direct.py| Python program: Wegner 1d Direct.py (for Factsheet 10)]]. <br />
* [[Wegner 2d Exact.py| Python program: Wegner 2d Exact.py (for Factsheet 10)]].<br />
* [[Vortex pair.py| Python program: Vortex_pair.py (for Lecture 10)]].<br />
<br />
References for Week 10:<br />
* F. Wegner, "Spin-Ordering in a Planar Classical Heisenberg Model" Z. Phys 206, 465 (1967) (Exact solution of the harmonic approximation to the XY model, algebraic long-range correlations). See Factsheet 10.<br />
* J. M. Kosterlitz, D. M. Thouless "Ordering, Metastability and phase transitions in two-dimensional systems" J. Phys. C: Solid State Physics 6, 1181 (1973) (Nobel-prize winning paper, proposing topological excitations. For the free-energy argument for the XY model see p. 1190 ff). See homework.<br />
* J. Fröhlich, T. Spencer "The Kosterlitz-Thouless Transition in Two-Dimensional Abelian Spin Systems and the Coulomb Gas" Comm. Math. Phys. 81, 527 (1981) (Paper proving a low-temperature phase with algebraic correlations. Nuance: This paper proves the existence of a low-temperature phase but not the presence of a KT transition. The title thus overstates the content of the paper).<br />
* E. Domany, M. Schick, and R. H. Swendsen "First-Order Transition in an xy Model with Nearest-Neighbor Interactions" Phys. Rev. Lett. 52, 1535 (1984) (Paper explaining the two energy scales J (for a first-order transition) and J_R (for the KT transition). The XY model and its variants can have KT transitions or else first-order transitions.)<br />
* M. Hasenbusch, "The two-dimensional XY model at the transition temperature: a high-precision Monte Carlo study" J. Phys. A: Math. Gen. 38, 5869 (2005) (This is the final one of a long series of computational-physics papers that have established that the transition in the XY model is indeed of the Kosterlitz-Thouless type. It computes the critical temperature to 5 significant digits: β_KT = 1.1199).<br />
<br />
==Week 11 (02 December 2019): Kosterlitz-Thouless physics in two dimensions: KTHNY Melting theory (Transitions without order parameters 2/2)==<br />
<br />
* [http://www.lps.ens.fr/%7Ekrauth/images/3/38/TD11_ICFP_2019.pdf Tutorial 11: The harmonic solid (with solutions)].<br />
<br />
References for Week 11:<br />
* J. M. Kosterlitz, D. M. Thouless "Ordering, Metastability and phase transitions in two-dimensional systems" J. Phys. C: Solid State Physics 6, 1181 (1973) (First two pages: Motivation of KT theory <=> 2D melting. Origin of KT theory <=> dislocation theory of melting).<br />
* N. D. Mermin, "Crystalline Order in 2 Dimensions", Phys. Rev. 176, 250 (1968) (Discovery of the decoupling of positional and orientational ordering in the two-dimensional harmonic model: see eqs 32 & 33).<br />
* D. R. Nelson, B. I. Halperin, "Dislocation-mediated melting in two dimensions" Phys. Rev. B 19, 2457 (1979) (THE theory paper on the 2D melting theory, quite advanced). <br />
* A. P. Young "Melting and the vector Coulomb gas in two dimensions" Phys. Rev. B 19, 1855 (1979) (Vector nature of the dislocation-dislocation interaction, quite advanced).<br />
* D. R. Nelson, J. M. Kosterlitz, "Universal Jump in the Superfluid Density of Two-Dimensional Superfluids" Phys. Rev. Lett. 39, 1201 (1977) (We have not yet treated this most striking prediction of KT theory in class).<br />
<br />
==Week 12 (09 December 2019): The renormalization group - an introduction==<br />
<br />
References for Week 12:<br />
* H. J. Maris & L. P. Kadanoff "Teaching the renormalization group" American Journal of Physics 46, 652 (1978)<br />
* L. P. Kadanoff "Scaling Laws for Ising models near T_c" Physics 2, 263 (1966)<br />
* K. G. Wilson "The renormalization group: Critical phenomena and the Kondo problem" Reviews of Modern Physics 47, 773 (1975)<br />
* K. G. Wilson "The renormalization group and critical phenomena" Reviews of Modern Physics 55, 583 (1983) (Nobel lecture 1982)<br />
* P. J. Reynolds, H. E. Stanley and W. Klein "A Real-space renormalization group for site and bond percolation" J. of Phys. C, 10, L167 (1977) (Tutorial)<br />
* D. Stauffer, A. Aharony, "Introduction to Percolation Theory", 2nd rev. ed., Taylor & Francis, 2003 (Tutorial)<br />
<br />
==Week 13 (16 December 2019): Quantum statistics 1/2: Ideal Bosons==<br />
<br />
* [http://www.lps.ens.fr/%7Ekrauth/images/2/21/HW13_ICFP_2019.pdf Homework 13: Ideal bosons - density of states, enumerations, and integrations (with solutions)]<br />
<br />
<br />
References for Week 13:<br />
* J. A. Lipa et al, "Specific heat of liquid helium in zero gravity very near the lambda point", Phys. Rev. B 68, 174518 (2003) (Space-shuttle experiments)<br />
* M. Hasenbusch, "The three-dimensional XY universality class: a high precision Monte Carlo estimate of the universal amplitude ratio A+/A−" J. Stat. Mech. (2006) P08019 (Interpretation of the space-shuttle experiments in the 3d XY model).<br />
* D. M. Ceperley, E. L. Pollock, "Path-integral computation of the low-temperature properties of liquid 4He" Phys. Rev. Lett. 56, 351 (1986) (First-principles numerical computation of the Lambda transition)<br />
* W. Krauth, "Statistical Mechanics: Algorithms and Computations" (2006) Chap 4.1: The two formulations of the model of ideal bosons<br />
<br />
==Week 14 (06 January 2020): Quantum statistics 2/2: 4He and the 3D Heisenberg model, Non-classical rotational inertia==<br />
<br />
References for Week 14:<br />
* G. B. Hess and W. M. Fairbank "Measurement of angular momentum in superfluid helium" Phys. Rev. Lett. 19, 216 (1967) (Non-classical response of a quantum fluid to rotation - A slowly rotating 4He vessel accelerates when cooled (!)).<br />
* W. Krauth "Statistical Mechanics: Algorithms and Computations" [http://www.lps.ens.fr/~krauth/images/7/74/SMAC_3_1_4.pdf Sect 3.1.4]. (Allows one to understand non-classical rotational inertia by considering only ideal quantum particles).<br />
* A. J. Leggett "Topics in the theory of helium" Physica Fennica 8, 125 (1973) (Fundamental paper which explains Non-classical rotational inertia very similarly to how we proceeded in the lecture.)<br />
<br />
==Week 15 (13 January 2020): The Fluctuation-Dissipation theorem (an introduction)==<br />
<br />
References for Week 15:<br />
* R. Kubo "The fluctuation-dissipation theorem" Reports on Progress in Physics, 29, 255 (1966). This is a fundamental text, of which we treat the first 10 pages, or so, in the lecture.<br />
* H. Risken "The Fokker-Planck Equation" (Springer, 1996).<br />
<br />
==References==<br />
Rudimentary Lecture notes are available (see above). A few essential references are given each week (see above, also). ICFP students can access these references from within the Department (you may for example connect to Web of Science, and download them from there). You may also ask the library staff at 29 rue d'Ulm.<br />
<br />
==Books==<br />
* L. Wasserman, "All of Statistics, A Concise Course in Statistical Inference" (Springer, 2005)<br />
* W. Krauth, "Statistical Mechanics: Algorithms and Computations" (Oxford, 2006)<br />
* M Plischke, B Bergersen, "Equilibrium Statistical Physics" (World Scientific, 2006)<br />
* L. D. Landau, E. M. Lifshitz, "Statistical Physics" (Pergamon, 1969)<br />
<br />
[[Category:ICFP Lectures]]</div>Wernerhttp://www.lps.ens.fr/~krauth/index.php/ICFP_Projets_bibliographiques_2019ICFP Projets bibliographiques 20192019-08-31T18:08:05Z<p>Summary: </p>
<hr />
<div>This is the homepage for the '''ICFP Projets bibliographiques''' (library-based project) for the fall/winter semester 2019 in the ICFP master at ENS.<br />
<br />
<br />
Responsable: Werner KRAUTH [mailto:werner.krauth@ens.fr mail]<br />
<br />
'''Latest updates:'''<br />
* 10-13 September 2019: Individual meetings scheduled for the week of 16 September. Please contact me if you have been left out.<br />
* 31 August 2019: Website set up.<br />
<br />
==Idea of the library-based projects at ENS==<br />
<br />
* Seek out, understand, classify relevant scientific literature on a chosen topic.<br />
* Enter in contact with an exciting research topic.<br />
* Enter in contact with a junior or senior professor/researcher at ENS or another accredited institution in the Greater Paris area, see her/him repeatedly.<br />
* Present the topic both in written form and orally in a scientifically sound and pedagogical manner.<br />
* Work in a curiosity-driven mode.<br />
* Learn essential skills.<br />
* If successful: Gain 6 ECTS.<br />
<br />
==Organization==<br />
<br />
* YOU [mailto:werner.krauth@ens.fr send mail to ME] declaring your interest. Include in your mail your general areas of interest.<br />
* I will organize exchange and individual meetings with any interested student.<br />
* I may also organize a meeting with the ENS library staff.<br />
* YOU find a possible supervisor, and an interesting project (Come see me or other Professors if you need suggestions). This process should be finished by Oct. 5, 2019.<br />
* YOU obtain written consent from the ICFP staff (me), by Oct. 12, 2019, that this choice is OK.<br />
* YOU organize regular meetings or other exchange with your advisor and with me, if necessary.<br />
* YOU immediately start reading.<br />
* YOU send intermediate half-page report by 16 Nov 2019.<br />
* YOU send in the final report by 08 January 2020 midnight.<br />
* YOU will defend your project on 21 / 22 / 23 Jan 2020.<br />
<br />
==DO's and DON'Ts==<br />
<br />
* A library-based project is (usually) about reading a few original research papers. Usually, you will have to branch out to other papers, books, etc. Usually, you will discuss with your supervisor, other professors, senior students, etc.<br />
* Original research papers may have been written at any time in the 19th, 20th, or 21st century.<br />
* Some projects ask you to read and understand papers from different periods, and using different approaches (theoretical, experimental, observational, computational...).<br />
* A library-based project is normally not about a chapter or so from a text book.<br />
* A library-based project needs you to connect with your supervisor and (normally) with the ICFP staff (me), on a regular basis.<br />
* A library-based project is all about overview/context and your ability to synthesize: Besides the papers there is what professors and researchers, and especially your supervisor, tell you about them.<br />
* A library-based project is difficult, yet very exciting.<br />
* A library-based project can be the source of much frustration, and of great satisfaction.<br />
* A library-based project, just as your other courses, may influence your career choices.<br />
* A library-based project is not a programming assignment (although you might write a little code to understand things).<br />
* A library-based project is certainly not an internship (stage). This means that, for example, you should not spend three months understanding a single equation, or a single page of an article, that is too difficult for you. Try to understand what the equation means: Ask your supervisor. Go beyond the equation! Understand the context!<br />
* A library-based project is not journalism: You need to understand the scientific questions quite in detail.<br />
* Plagiarism will not be tolerated, and you cannot do your library-based project with a member of your (extended) family.<br />
<br />
==Planning (provisional)==<br />
* Sep 3, 2019 - Presentation of the project by Kris van Houcke<br />
* Sep 3 - 13, 2019 - Message of intent (YOU to ME)<br />
* Sep 13 - October 5, 2019 - Mail exchange/Meetings with ICFP staff/ potential supervisors (YOU and ME)<br />
* Oct 5, 2019 - Project all set up (mail to ME, hard deadline)<br />
* Oct 8, 2019 - Acceptance/Rejection of project (ME to YOU) <br />
* Nov 16, 2019 - One-page intermediate report (YOU to ME)<br />
* Jan 8, 2020 (midnight) - Due date of report (YOU to ME) - hard deadline<br />
* 21 / 22 / 23 January 2020 - Oral defense ("soutenance") of the project (YOU to JURY)<br />
<br />
==Report / Defense (soutenance)==<br />
<br />
* The report is limited to 15 pages (including references), in correct English, and in scientific style. You should have it re-read by others.<br />
* Additionally, you will provide a signed anti-plagiarism declaration. This document will be provided in due time. By this you certify that the report is entirely written by yourself, and that you cited all your sources. <br />
* The report must contain an introduction, a development, and a conclusion/outlook.<br />
* The oral presentation lasts 15 min (in English), and it must be comprehensible to the ICFP professor (me) and to the outside expert.<br />
* There are 10 min of questions. Anything you have written or said is open to scrutiny.<br />
* Besides the content of the report and the presentation, the jury may ask questions of context.</div>Werner