Ising model

The Ising model (/ˈaɪsɪŋ/; German: [ˈiːzɪŋ]), named after the physicist Ernst Ising, is a mathematical model of ferromagnetism in statistical mechanics. The model consists of discrete variables that represent magnetic dipole moments of atomic spins, which can be in one of two states (+1 or −1). The spins are arranged in a graph, usually a lattice, allowing each spin to interact with its neighbors. As a simplified model of reality, the Ising model allows the identification of phase transitions. The two-dimensional square-lattice Ising model is one of the simplest statistical models to show a phase transition (see Gallavotti 1999, Chapters VI–VII).

The Ising model was invented by the physicist Wilhelm Lenz (1920), who gave it as a problem to his student Ernst Ising. The one-dimensional Ising model has no phase transition and was solved by Ising (1925) himself in his 1924 thesis (Ernst Ising, Contribution to the Theory of Ferromagnetism). The two-dimensional square-lattice Ising model is much harder, and was given an analytic description much later, by Lars Onsager (1944). It is usually solved by a transfer-matrix method, although there exist different approaches, more related to quantum field theory. In dimensions greater than four, the phase transition of the Ising model is described by mean-field theory.


Definition

Consider a set Λ of lattice sites, each with a set of adjacent sites (e.g. a graph) forming a d-dimensional lattice. For each lattice site k ∈ Λ there is a discrete variable σk such that σk ∈ {+1, −1}, representing the site's spin. A spin configuration, σ = (σk)k∈Λ, is an assignment of a spin value to each lattice site.

For any two adjacent sites i, j ∈ Λ there is an interaction Jij. Also, a site j ∈ Λ has an external magnetic field hj interacting with it. The energy of a configuration σ is given by the Hamiltonian function
H(\sigma) = -\sum_{\langle i\, j\rangle} J_{ij} \sigma_i \sigma_j - \mu \sum_{j} h_j \sigma_j
where the first sum is over pairs of adjacent spins (every pair is counted once). The notation ⟨ij⟩ indicates that sites i and j are nearest neighbors. The magnetic moment is given by μ. Note that the sign in the second term of the Hamiltonian above should actually be positive because the electron's magnetic moment is antiparallel to its spin, but the negative term is used conventionally (see Baierlein 1999, Chapter 16).

The configuration probability is given by the Boltzmann distribution with inverse temperature β ≥ 0:
P_\beta(\sigma) = \frac{e^{-\beta H(\sigma)}}{Z_\beta},
where β = 1/(kBT) and the normalization constant
Z_\beta = \sum_\sigma e^{-\beta H(\sigma)}
is the partition function. For a function f of the spins ("observable"), one denotes by
\langle f \rangle_\beta = \sum_\sigma f(\sigma) P_\beta(\sigma)
the expectation (mean) value of f.

The configuration probabilities Pβ(σ) represent the probability that (in equilibrium) the system is in a state with configuration σ.
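The definitions above can be checked by brute force on a tiny lattice. The sketch below (Python; the helper names `energy`, `partition_function`, and `expectation` are illustrative, not from any library) enumerates all 2^L configurations of a short open chain, computes the Boltzmann weights, and evaluates an observable:

```python
import itertools
import math

def energy(spins, J=1.0, h=0.0, mu=1.0):
    """H(sigma) = -J * (sum over adjacent pairs) - mu * h * (sum of spins),
    for a 1D open chain, each nearest-neighbor pair counted once."""
    interaction = sum(spins[i] * spins[i + 1] for i in range(len(spins) - 1))
    return -J * interaction - mu * h * sum(spins)

def partition_function(L, beta, J=1.0, h=0.0):
    """Z_beta = sum over all 2^L configurations of exp(-beta * H(sigma))."""
    return sum(math.exp(-beta * energy(s, J, h))
               for s in itertools.product((+1, -1), repeat=L))

def expectation(f, L, beta, J=1.0, h=0.0):
    """<f>_beta = sum_sigma f(sigma) * P_beta(sigma), with Boltzmann weights."""
    Z = partition_function(L, beta, J, h)
    return sum(f(s) * math.exp(-beta * energy(s, J, h)) / Z
               for s in itertools.product((+1, -1), repeat=L))

# With h = 0 the Hamiltonian is symmetric under flipping every spin,
# so the mean magnetization vanishes exactly at finite volume.
mean_spin = expectation(lambda s: sum(s) / len(s), L=4, beta=1.0)
```

Exhaustive enumeration is only feasible for small L, which is exactly the limitation that motivates the Monte Carlo methods discussed later.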


The minus sign on each term of the Hamiltonian function H(σ) is conventional. Using this sign convention, the Ising model can be classified according to the sign of the interaction: if, for all pairs i, j:
  • Jij > 0, the interaction is called ferromagnetic,
  • Jij < 0, the interaction is called antiferromagnetic,
  • Jij = 0, the spins are noninteracting;
otherwise the system is called nonferromagnetic.

In a ferromagnetic Ising model, spins desire to be aligned: the configurations in which adjacent spins are of the same sign have higher probability. In an antiferromagnetic model, adjacent spins tend to have opposite signs.

The sign convention of H(σ) also explains how a spin site j interacts with the external field. Namely, the spin site wants to line up with the external field. If:
  • hj > 0, the spin site j desires to line up in the positive direction,
  • hj < 0, the spin site j desires to line up in the negative direction,
  • hj = 0, there is no external influence on the spin site.

In two and more dimensions the ferromagnetic model has a phase transition: at low enough temperature it develops a spontaneous magnetization M(β) > 0. This was first proven by Rudolf Peierls in 1936 (Peierls, R.; Born, M., "On Ising's model of ferromagnetism", Mathematical Proceedings of the Cambridge Philosophical Society 32(3), 477, 1936, doi:10.1017/S0305004100019174), using what is now called a Peierls argument.

The Ising model on a two-dimensional square lattice with no magnetic field was analytically solved by Lars Onsager (1944). Onsager showed that the correlation functions and free energy of the Ising model are determined by a noninteracting lattice fermion. Onsager announced the formula for the spontaneous magnetization of the two-dimensional model in 1949 but did not give a derivation. Yang (1952) gave the first published proof of this formula, using a limit formula for Fredholm determinants, proved in 1951 by Szegő in direct response to Onsager's work (Montroll, Potts & Ward 1963, pp. 308–309).

Historical significance

One of Democritus' arguments in support of atomism was that atoms naturally explain the sharp phase boundaries observed in materials, as when ice melts to water or water turns to steam. His idea was that small changes in atomic-scale properties would lead to big changes in the aggregate behavior. Others believed that matter is inherently continuous, not atomic, and that the large-scale properties of matter are not reducible to basic atomic properties.

While the laws of chemical binding made it clear to nineteenth-century chemists that atoms were real, among physicists the debate continued well into the early twentieth century. Atomists, notably James Clerk Maxwell and Ludwig Boltzmann, applied Hamilton's formulation of Newton's laws to large systems, and found that the statistical behavior of the atoms correctly describes room-temperature gases. But classical statistical mechanics did not account for all of the properties of liquids and solids, nor of gases at low temperature.

Once modern quantum mechanics was formulated, atomism was no longer in conflict with experiment, but this did not lead to a universal acceptance of statistical mechanics, which went beyond atomism. Josiah Willard Gibbs had given a complete formalism to reproduce the laws of thermodynamics from the laws of mechanics. But many faulty arguments survived from the 19th century, when statistical mechanics was considered dubious. The lapses in intuition mostly stemmed from the fact that the limit of an infinite statistical system has many zero-one laws which are absent in finite systems: an infinitesimal change in a parameter can lead to big differences in the overall, aggregate behavior, as Democritus expected.

No phase transitions in finite volume

In the early part of the twentieth century, some believed that the partition function could never describe a phase transition, based on the following argument:
  1. The partition function is a sum of e−βE over all configurations.
  2. The exponential function is everywhere analytic as a function of β.
  3. The sum of analytic functions is an analytic function.
This argument works for a finite sum of exponentials, and correctly establishes that there are no singularities in the free energy of a system of a finite size. For systems which are in the thermodynamic limit (that is, for infinite systems) the infinite sum can lead to singularities. The convergence to the thermodynamic limit is fast, so that the phase behavior is apparent already on a relatively small lattice, even though the singularities are smoothed out by the system's finite size.

This was first established by Rudolf Peierls in the Ising model.

Peierls droplets

Shortly after Lenz and Ising constructed the Ising model, Peierls was able to explicitly show that a phase transition occurs in two dimensions.

To do this, he compared the high-temperature and low-temperature limits. At infinite temperature, β = 0, all configurations have equal probability. Each spin is completely independent of any other, and if typical configurations at infinite temperature are plotted so that plus/minus are represented by black and white, they look like television snow. For high, but not infinite, temperature, there are small correlations between neighboring positions, the snow tends to clump a little bit, but the screen stays random looking, and there is no net excess of black or white.

A quantitative measure of the excess is the magnetization, which is the average value of the spin:
M = \frac{1}{N} \sum_{i=1}^{N} \sigma_i.
A bogus argument analogous to the argument in the last section now establishes that the magnetization in the Ising model is always zero.
  1. Every configuration of spins has equal energy to the configuration with all spins flipped.
  2. So for every configuration with magnetization M there is a configuration with magnetization −M with equal probability.
  3. The system should therefore spend equal amounts of time in the configuration with magnetization M as with magnetization −M.
  4. So the average magnetization (over all time) is zero.
As before, this only proves that the average magnetization is zero at any finite volume. For an infinite system, fluctuations might not be able to push the system from a mostly-plus state to a mostly-minus state with a nonzero probability.

For very high temperatures, the magnetization is zero, as it is at infinite temperature. To see this, note that if spin A has only a small correlation ε with spin B, and B is only weakly correlated with C, but C is otherwise independent of A, the amount of correlation of A and C goes like ε². For two spins separated by distance L, the amount of correlation goes as ε^L, but if there is more than one path by which the correlations can travel, this amount is enhanced by the number of paths.

The number of paths of length L on a square lattice in d dimensions is
N(L) = (2d)^L
since there are 2d choices for where to go at each step.

A bound on the total correlation is given by the contribution to the correlation from summing over all paths linking two points, which is bounded above by the sum over all paths of length L:
\sum_L (2d)^L \varepsilon^L
which goes to zero when ε is small.

At low temperatures, β ≫ 1, the configurations are near the lowest-energy configuration, the one where all the spins are plus or all the spins are minus. Peierls asked whether it is statistically possible at low temperature, starting with all the spins minus, to fluctuate to a state where most of the spins are plus. For this to happen, droplets of plus spin must be able to congeal to make the plus state.

The energy of a droplet of plus spins in a minus background is proportional to the perimeter of the droplet L, where plus spins and minus spins neighbor each other. For a droplet with perimeter L, the area is somewhere between (L − 2)/2 (the straight line) and (L/4)² (the square box). The probability cost for introducing a droplet has the factor e^{−βL}, but this contributes to the partition function multiplied by the total number of droplets with perimeter L, which is less than the total number of paths of length L:
N(L)< 4^{2L}.
So that the total spin contribution from droplets, even overcounting by allowing each site to have a separate droplet, is bounded above by
\sum_L L^2 4^{2L} e^{-4\beta L}
which goes to zero at large β. For β sufficiently large, this exponentially suppresses long loops, so that they cannot occur, and the magnetization never fluctuates too far from −1.

So Peierls established that the magnetization in the Ising model eventually defines superselection sectors, separated domains which are not linked by finite fluctuations.

Kramers–Wannier duality

Kramers and Wannier were able to show that the high temperature expansion and the low temperature expansion of the model are equal up to an overall rescaling of the free energy. This allowed the phase transition point in the two-dimensional model to be determined exactly (under the assumption that there is a unique critical point).

Yang–Lee zeros

After Onsager's solution, Yang and Lee investigated the way in which the partition function becomes singular as the temperature approaches the critical temperature.

Monte Carlo methods for numerical simulation

(File:Ising quench b10.gif|framed|right|Quench of an Ising system on a two-dimensional square lattice (500 × 500) with inverse temperature β = 10, starting from a random configuration.)


The Ising model can often be difficult to evaluate numerically if there are many states in the system. Consider an Ising model with
  • L = |Λ|: the total number of sites on the lattice,
  • σj ∈ {−1, +1}: an individual spin site on the lattice, j = 1, ..., L,
  • S ∈ {−1, +1}^L: state of the system.
Since every spin site has ±1 spin, there are 2^L different states that are possible (Newman, M. E. J.; Barkema, G. T., Monte Carlo Methods in Statistical Physics, Clarendon Press, 1999). This motivates the reason for the Ising model to be simulated using Monte Carlo methods.

The Hamiltonian that is commonly used to represent the energy of the model when using Monte Carlo methods is:
H(\sigma) = -J\sum_{\langle i\, j\rangle} \sigma_i \sigma_j - h\sum_{j} \sigma_j.
Furthermore, the Hamiltonian is simplified by assuming zero external field h, since many questions that are posed to be solved using the model can be answered in the absence of an external field. This leads us to the following energy equation for state σ:
H(\sigma) = -J\sum_{\langle i\, j\rangle} \sigma_i \sigma_j.
Given this Hamiltonian, quantities of interest such as the specific heat or the magnetization of the magnet at a given temperature can be calculated.

Metropolis algorithm


The Metropolis–Hastings algorithm is the most commonly used Monte Carlo algorithm to calculate Ising model estimations. The algorithm first chooses selection probabilities g(μ, ν), which represent the probability that state ν is selected by the algorithm out of all states, given that one is in state μ. It then uses acceptance probabilities A(μ, ν) so that detailed balance is satisfied. If the new state ν is accepted, then we move to that state and repeat with selecting a new state and deciding to accept it. If ν is not accepted then we stay in μ. This process is repeated until some stopping criterion is met, which for the Ising model is often when the lattice becomes ferromagnetic, meaning all of the sites point in the same direction.

When implementing the algorithm, one must ensure that g(μ, ν) is selected such that ergodicity is met. In thermal equilibrium a system's energy only fluctuates within a small range. This is the motivation behind the concept of single-spin-flip dynamics, which states that in each transition, we will only change one of the spin sites on the lattice. Furthermore, by using single-spin-flip dynamics, one can get from any state to any other state by flipping each site that differs between the two states one at a time.

The maximum amount of change between the energy of the present state, Hμ, and any possible new state's energy, Hν (using single-spin-flip dynamics), is 2J between the spin we choose to "flip" to move to the new state and that spin's neighbor. Thus, in a 1D Ising model, where each site has two neighbors (left and right), the maximum difference in energy would be 4J.

Let c represent the lattice coordination number; the number of nearest neighbors that any lattice site has. We assume that all sites have the same number of neighbors due to periodic boundary conditions. It is important to note that the Metropolis–Hastings algorithm does not perform well around the critical point due to critical slowing down.
Other techniques such as multigrid methods, Niedermayer's algorithm, Swendsen–Wang algorithm, or the Wolff algorithm are required in order to resolve the model near the critical point; a requirement for determining the critical exponents of the system.


Specifically for the Ising model and using single-spin-flip dynamics, one can establish the following.

Since there are L total sites on the lattice, using single-spin-flip as the only way we transition to another state, we can see that there are a total of L new states ν reachable from our present state μ. The algorithm assumes that the selection probabilities are equal for these L states: g(μ, ν) = 1/L. Detailed balance tells us that the following equation must hold:
\frac{P(\mu,\nu)}{P(\nu,\mu)} = \frac{g(\mu,\nu)A(\mu,\nu)}{g(\nu,\mu)A(\nu,\mu)} = \frac{A(\mu,\nu)}{A(\nu,\mu)} = \frac{P_\beta(\nu)}{P_\beta(\mu)} = \frac{\frac{1}{Z}e^{-\beta H_\nu}}{\frac{1}{Z}e^{-\beta H_\mu}} = e^{-\beta(H_\nu - H_\mu)}.
Thus, we want to select the acceptance probability for our algorithm to satisfy:
If Hν > Hμ, then A(ν, μ) > A(μ, ν). Metropolis sets the larger of A(μ, ν) or A(ν, μ) to be 1. By this reasoning the acceptance algorithm is:

A(\mu,\nu) = \begin{cases} e^{-\beta(H_\nu - H_\mu)}, & \text{if } H_\nu - H_\mu > 0, \\ 1, & \text{otherwise}. \end{cases}

The basic form of the algorithm is as follows:
  1. Pick a spin site using selection probability g(μ, ν) and calculate the contribution to the energy involving this spin.
  2. Flip the value of the spin and calculate the new contribution.
  3. If the new energy is less, keep the flipped value.
  4. If the new energy is more, only keep with probability e^{-\beta(H_\nu - H_\mu)}.
  5. Repeat.
The change in energy Hν − Hμ only depends on the value of the spin and its nearest graph neighbors. So if the graph is not too connected, the algorithm is fast. This process will eventually produce samples from the Boltzmann distribution.
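The five steps above can be sketched in a few lines of Python. This is a minimal illustration rather than a tuned implementation; the function names and the lattice representation (a list of lists with periodic boundaries, J = 1, h = 0) are choices made here for clarity:

```python
import math
import random

def metropolis_step(spins, beta, J=1.0):
    """One single-spin-flip Metropolis update on an L x L periodic lattice (h = 0)."""
    L = len(spins)
    i, j = random.randrange(L), random.randrange(L)
    # Energy change from flipping spin (i, j): dH = 2*J*s_ij*(sum of the 4 neighbors)
    nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
          + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
    dH = 2.0 * J * spins[i][j] * nn
    # Accept the flip with probability min(1, exp(-beta*dH))
    if dH <= 0 or random.random() < math.exp(-beta * dH):
        spins[i][j] = -spins[i][j]

def simulate(L=16, beta=1.0, sweeps=200, seed=0):
    """Run Metropolis sweeps from a random start; return magnetization per site."""
    random.seed(seed)
    spins = [[random.choice((+1, -1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        metropolis_step(spins, beta)
    return sum(map(sum, spins)) / (L * L)
```

Because only the chosen spin and its four neighbors enter the energy difference, each update costs O(1) work, which is what makes the method practical on large lattices.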

Viewing the Ising model as a Markov chain

It is possible to view the Ising model as a Markov chain, as the immediate probability Pβ(ν) of transitioning to a future state ν only depends on the present state μ. The Metropolis algorithm is actually a version of a Markov chain Monte Carlo simulation, and since we use single-spin-flip dynamics in the Metropolis algorithm, every state can be viewed as having links to exactly L other states, where each transition corresponds to flipping a single spin site to the opposite value (Teif, V. B., "General transfer matrix formalism to calculate DNA–protein–drug binding in gene regulation", Nucleic Acids Research 35(11), e80, 2007, doi:10.1093/nar/gkm268). Furthermore, since the change in energy Hσ only depends on the nearest-neighbor interaction strength J, the Ising model and its variants, such as the Sznajd model, can be seen as a form of a voter model for opinion dynamics.

One dimension

The thermodynamic limit exists as soon as the interaction decay is J_{ij} \sim |i-j|^{-\alpha} with α > 1 (Ruelle, Statistical Mechanics: Rigorous Results, W. A. Benjamin, New York, 1969).
  • In the case of ferromagnetic interaction J_{ij} \sim |i-j|^{-\alpha} with 1 < α < 2, Dyson proved, by comparison with the hierarchical case, that there is a phase transition at small enough temperature (Dyson, F. J., "Existence of a phase-transition in a one-dimensional Ising ferromagnet", Comm. Math. Phys. 12(2), 91–107, 1969, doi:10.1007/BF01645907).
  • In the case of ferromagnetic interaction J_{ij} \sim |i-j|^{-2}, Fröhlich and Spencer proved that there is a phase transition at small enough temperature (in contrast with the hierarchical case) (Fröhlich, J.; Spencer, T., "The phase transition in the one-dimensional Ising model with 1/r² interaction energy", Comm. Math. Phys. 84(1), 87–101, 1982, doi:10.1007/BF01208373).
  • In the case of interaction J_{ij} \sim |i-j|^{-\alpha} with α > 2 (which includes the case of finite-range interactions), there is no phase transition at any positive temperature (i.e. finite β), since the free energy is analytic in the thermodynamic parameters.
  • In the case of nearest-neighbor interactions, E. Ising provided an exact solution of the model. At any positive temperature (i.e. finite β) the free energy is analytic in the thermodynamic parameters and the truncated two-point spin correlation decays exponentially fast. At zero temperature (i.e. infinite β), there is a second-order phase transition: the free energy is infinite and the truncated two-point spin correlation does not decay (remains constant). Therefore, T = 0 is the critical temperature of this case. Scaling formulas are satisfied (Baxter, Rodney J., Exactly Solved Models in Statistical Mechanics, Academic Press, London, 1982, ISBN 978-0-12-083180-7, MR 690578).

Ising's exact solution

In the nearest-neighbor case (with periodic or free boundary conditions) an exact solution is available. The energy of the one-dimensional Ising model on a lattice of L sites with free boundary conditions is

H(\sigma) = -J\sum_{i=1,\ldots,L-1} \sigma_i \sigma_{i+1} - h\sum_i \sigma_i

where J and h can be any number; in this simplified case J is a constant representing the interaction strength between nearest neighbors and h is the constant external magnetic field applied to lattice sites. Then the free energy is
f(\beta, h) = -\lim_{L\to\infty} \frac{1}{\beta L} \ln Z(\beta) = -\frac{1}{\beta} \ln\left(e^{\beta J} \cosh\beta h + \sqrt{e^{2\beta J}(\sinh\beta h)^2 + e^{-2\beta J}}\right)
and the spin-spin correlation is
\langle \sigma_i \sigma_j \rangle - \langle \sigma_i \rangle \langle \sigma_j \rangle = C(\beta) e^{-c(\beta)|i-j|}
where C(β) and c(β) are positive functions for T > 0. For T → 0, though, the inverse correlation length, c(β), vanishes.


The proof of this result is a simple computation.

If h = 0, it is very easy to obtain the free energy in the case of free boundary conditions, i.e. when the Hamiltonian is H(\sigma) = -J\sum_{i=1,\ldots,L-1} \sigma_i \sigma_{i+1}.
Then the model factorizes under the change of variables
\sigma'_j = \sigma_j \sigma_{j-1}, \qquad j \ge 2.
That gives
Z(\beta) = \sum_{\sigma_1,\ldots,\sigma_L} e^{\beta J\sigma_1\sigma_2}\, e^{\beta J\sigma_2\sigma_3} \cdots e^{\beta J\sigma_{L-1}\sigma_L} = 2\prod_{j=2}^L \sum_{\sigma'_j} e^{\beta J\sigma'_j} = 2\left[e^{\beta J} + e^{-\beta J}\right]^{L-1}.
Therefore, the free energy is
f(\beta,0) = -\frac{1}{\beta} \ln\left[e^{\beta J} + e^{-\beta J}\right].
With the same change of variables
\langle \sigma_j \sigma_{j+N} \rangle = \left[\frac{e^{\beta J} - e^{-\beta J}}{e^{\beta J} + e^{-\beta J}}\right]^N
hence it decays exponentially as soon as T ≠ 0; but for T = 0, i.e. in the limit β → ∞, there is no decay.

If h ≠ 0 we need the transfer matrix method. For the case of periodic boundary conditions it is the following. The partition function is
Z(\beta) = \sum_{\sigma_1,\ldots,\sigma_L} e^{\beta h \sigma_1} e^{\beta J\sigma_1\sigma_2}\, e^{\beta h \sigma_2} e^{\beta J\sigma_2\sigma_3} \cdots e^{\beta h \sigma_L} e^{\beta J\sigma_L\sigma_1} = \sum_{\sigma_1,\ldots,\sigma_L} V_{\sigma_1,\sigma_2} V_{\sigma_2,\sigma_3} \cdots V_{\sigma_L,\sigma_1}.
The coefficients V_{\sigma,\sigma'} can be seen as the entries of a matrix. There are different possible choices; a convenient one (because the matrix is symmetric) is
V_{\sigma,\sigma'} = e^{\frac{\beta h}{2}\sigma}\, e^{\beta J\sigma\sigma'}\, e^{\frac{\beta h}{2}\sigma'}
V = \begin{bmatrix} e^{\beta(h+J)} & e^{-\beta J} \\ e^{-\beta J} & e^{-\beta(h-J)} \end{bmatrix}.
In matrix formalism
Z(\beta) = \mathrm{Tr}\left(V^L\right) = \lambda_1^L + \lambda_2^L = \lambda_1^L\left[1 + \left(\frac{\lambda_2}{\lambda_1}\right)^L\right]
where λ1 is the highest eigenvalue of V, while λ2 is the other eigenvalue:
\lambda_1 = e^{\beta J} \cosh\beta h + \sqrt{e^{2\beta J}(\sinh\beta h)^2 + e^{-2\beta J}}
and |λ2| < λ1. This gives the formula of the free energy.
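The transfer-matrix result can be sanity-checked numerically. The sketch below (Python; the function names are ours) compares Tr(V^L) = λ1^L + λ2^L with a brute-force sum over all 2^L configurations of a periodic chain:

```python
import itertools
import math

def brute_force_Z(L, beta, J, h):
    """Partition function of the periodic 1D chain, summing all 2^L configurations."""
    Z = 0.0
    for s in itertools.product((+1, -1), repeat=L):
        H = -J * sum(s[i] * s[(i + 1) % L] for i in range(L)) - h * sum(s)
        Z += math.exp(-beta * H)
    return Z

def transfer_matrix_Z(L, beta, J, h):
    """Z = Tr(V^L) = lambda1^L + lambda2^L, using the two eigenvalues of V."""
    root = math.sqrt(math.exp(2 * beta * J) * math.sinh(beta * h) ** 2
                     + math.exp(-2 * beta * J))
    lam1 = math.exp(beta * J) * math.cosh(beta * h) + root
    lam2 = math.exp(beta * J) * math.cosh(beta * h) - root
    return lam1 ** L + lam2 ** L
```

The two routines agree to floating-point precision for any small L, β, J, and h, which is a direct check of the eigenvalue formula for λ1 and λ2.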


The energy of the lowest state is −JL, when all the spins are the same. For any other configuration, the extra energy is equal to 2J times the number of sign changes that are encountered when scanning the configuration from left to right.

If we designate the number of sign changes in a configuration as k, the difference in energy from the lowest-energy state is 2kJ. Since the energy is additive in the number of flips, the probability p of having a spin-flip at each position is independent. The ratio of the probability of finding a flip to the probability of not finding one is the Boltzmann factor:
\frac{p}{1-p} = e^{-2\beta J}.
The problem is reduced to independent biased coin tosses. This essentially completes the mathematical description.

From the description in terms of independent tosses, the statistics of the model for long lines can be understood. The line splits into domains. Each domain is of average length exp(2βJ). The length of a domain is distributed exponentially, since there is a constant probability at any step of encountering a flip. The domains never become infinite, so a long system is never magnetized. Each step reduces the correlation between a spin and its neighbor by an amount proportional to p, so the correlations fall off exponentially:
\langle S_i S_j \rangle \propto e^{-p|i-j|}.
The partition function is the volume of configurations, each configuration weighted by its Boltzmann weight. Since each configuration is described by the sign-changes, the partition function factorizes:
Z = \sum_{\mathrm{configs}} e^{\sum_k S_k} = \prod_k (1 + p) = (1+p)^L.
The logarithm divided by L is the free energy density:
\beta f = \log(1+p) = \log\left(1 + \frac{e^{-2\beta J}}{1 + e^{-2\beta J}}\right),
which is analytic away from β = ∞. A sign of a phase transition is a non-analytic free energy, so the one-dimensional model does not have a phase transition.

One-dimensional solution with transverse field

To express the Ising Hamiltonian using a quantum mechanical description of spins, we replace the spin variables with their respective Pauli matrices. However, depending on the direction of the magnetic field, we can create a transverse-field or longitudinal-field Hamiltonian. The transverse-field Hamiltonian is given by
H(\sigma) = -J\sum_{i=1,\ldots,L} \sigma_i^z \sigma_{i+1}^z - h\sum_i \sigma_i^x.
The transverse-field model experiences a phase transition between an ordered and a disordered regime at J ~ h. This can be shown by a mapping of the Pauli matrices
\sigma_n^z = \prod_{i=1}^n T_i^x,
\sigma_n^x = T_n^z T_{n+1}^z.
Upon rewriting the Hamiltonian in terms of these change-of-basis matrices, we obtain
H(\sigma) = -h\sum_{i=1,\ldots,L} T_i^z T_{i+1}^z - J\sum_i T_i^x.
Since the roles of h and J are switched, the Hamiltonian undergoes a transition at J = h (Suzuki, Sei; Inoue, Jun-ichi; Chakrabarti, Bikas K., Quantum Ising Phases and Transitions in Transverse Ising Models, Springer, 2012, doi:10.1007/978-3-642-33039-1, ISBN 978-3-642-33038-4).

Two dimensions

  • In the ferromagnetic case there is a phase transition. At low temperature, the Peierls argument proves positive magnetization for the nearest neighbor case and then, by the Griffiths inequality, also when longer range interactions are added. Meanwhile, at high temperature, the cluster expansion gives analyticity of the thermodynamic functions.
  • In the nearest-neighbor case, the free energy was exactly computed by Onsager, through the equivalence of the model with free fermions on lattice. The spin-spin correlation functions were computed by McCoy and Wu.

Onsager's exact solution

{{harvtxt|Onsager|1944}} obtained the following analytical expression for the free energy of the Ising model on the anisotropic square lattice when the magnetic field h=0 in the thermodynamic limit as a function of temperature and the horizontal and vertical interaction energies J_1 and J_2, respectively
-\beta f = \ln 2 + \frac{1}{8\pi^2}\int_0^{2\pi} d\theta_1 \int_0^{2\pi} d\theta_2\, \ln\left[\cosh(2\beta J_1)\cosh(2\beta J_2) - \sinh(2\beta J_1)\cos\theta_1 - \sinh(2\beta J_2)\cos\theta_2\right].
From this expression for the free energy, all thermodynamic functions of the model can be calculated by using an appropriate derivative. The 2D Ising model was the first model to exhibit a continuous phase transition at a positive temperature. It occurs at the temperature T_c which solves the equation
\sinh\left(\frac{2J_1}{kT_c}\right)\sinh\left(\frac{2J_2}{kT_c}\right) = 1.
In the isotropic case when the horizontal and vertical interaction energies are equal J_1=J_2=J, the critical temperature T_c occurs at the following point
T_c = \frac{2J}{k\ln\left(1+\sqrt{2}\right)}.
When the interaction energies J_1, J_2 are both negative, the Ising model becomes an antiferromagnet. Since the square lattice is bipartite, it is invariant under this change when the magnetic field h = 0, so the free energy and critical temperature are the same for the antiferromagnetic case. For the triangular lattice, which is not bipartite, the ferromagnetic and antiferromagnetic Ising models behave notably differently.
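The isotropic critical point can be evaluated and checked against Onsager's condition in a couple of lines (Python; working in units with J = k = 1 is an assumption made here):

```python
import math

def critical_temperature(J=1.0, k=1.0):
    """Isotropic square lattice: T_c solves sinh(2J/(k*T_c))^2 = 1,
    giving T_c = 2J / (k * ln(1 + sqrt(2)))."""
    return 2.0 * J / (k * math.log(1.0 + math.sqrt(2.0)))

Tc = critical_temperature()  # about 2.269 in units of J/k
# Check Onsager's criticality condition sinh(2*J1/(k*Tc)) * sinh(2*J2/(k*Tc)) = 1
lhs = math.sinh(2.0 / Tc) ** 2
```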

Transfer matrix

Start with an analogy with quantum mechanics. The Ising model on a long periodic lattice has a partition function
\sum_S \exp\biggl(\sum_{ij} S_{i,j} S_{i,j+1} + S_{i,j} S_{i+1,j}\biggr).
Think of the i direction as space, and the j direction as time. This is an independent sum over all the values that the spins can take at each time slice. This is a type of path integral; it is the sum over all spin histories.

A path integral can be rewritten as a Hamiltonian evolution. The Hamiltonian steps through time by performing a unitary rotation between time t and time t + Δt:
U = e^{i H \Delta t}
The product of the U matrices, one after the other, is the total time evolution operator, which is the path integral we started with.
U^N = \left(e^{i H \Delta t}\right)^N = \int DX\, e^{iL}
where N is the number of time slices. The sum over all paths is given by a product of matrices, each matrix element of which is the transition probability from one slice to the next.

Similarly, one can divide the sum over all partition function configurations into slices, where each slice is the one-dimensional configuration at a given time. This defines the transfer matrix:
T_{C_1 C_2}.
The configuration in each slice is a one-dimensional collection of spins. At each time slice, T has matrix elements between two configurations of spins, one in the immediate future and one in the immediate past. These two configurations are C1 and C2, and they are all one-dimensional spin configurations. We can think of the vector space that T acts on as all complex linear combinations of these. Using quantum mechanical notation:
|A\rangle = \sum_S A(S)\, |S\rangle
where each basis vector |S⟩ is a spin configuration of a one-dimensional Ising model.

Like the Hamiltonian, the transfer matrix acts on all linear combinations of states. The partition function is a matrix function of T, which is defined by the sum over all histories which come back to the original configuration after N steps:
Z = \mathrm{tr}\left(T^N\right).
Since this is a matrix equation, it can be evaluated in any basis. So if we can diagonalize the matrix T, we can find Z.

T in terms of Pauli matrices

The contribution to the partition function for each past/future pair of configurations on a slice is the sum of two terms. There is the number of spin flips in the past slice and there is the number of spin flips between the past and future slice. Define an operator on configurations which flips the spin at site i, denoted \sigma^x_i. In the usual Ising basis, acting on any linear combination of past configurations, it produces the same linear combination but with the spin at position i of each basis vector flipped.

Define a second operator, denoted \sigma^z_i, which multiplies the basis vector by +1 or −1 according to the spin at position i. T can be written in terms of these:
\sum_i A \sigma^x_i + B \sigma^z_i \sigma^z_{i+1}
where A and B are constants which are to be determined so as to reproduce the partition function. The interpretation is that the statistical configuration at this slice contributes according to both the number of spin flips in the slice, and whether or not the spin at position i has flipped.

Spin flip creation and annihilation operators

Just as in the one-dimensional case, we will shift attention from the spins to the spin-flips. The \sigma^z term in T counts the number of spin flips, which we can write in terms of spin-flip creation and annihilation operators:
\sum_i C \psi^\dagger_i \psi_i.
The first term flips a spin, so depending on the basis state it either:
  1. moves a spin-flip one unit to the right
  2. moves a spin-flip one unit to the left
  3. produces two spin-flips on neighboring sites
  4. destroys two spin-flips on neighboring sites.
Writing this out in terms of creation and annihilation operators:
\sigma^x_i = D {\psi^\dagger}_i \psi_{i+1} + D^* {\psi^\dagger}_i \psi_{i-1} + C \psi_i \psi_{i+1} + C^* {\psi^\dagger}_i {\psi^\dagger}_{i+1}.
Ignore the constant coefficients, and focus attention on the form: the terms are all quadratic. Since the coefficients are constant, the T matrix can be diagonalized by Fourier transforms. Carrying out the diagonalization produces the Onsager free energy.

Onsager's formula for spontaneous magnetization

Onsager famously announced the following expression for the spontaneous magnetization M of a two-dimensional Ising ferromagnet on the square lattice at two different conferences in 1948, though without proof:
M = \left(1 - \left[\sinh 2\beta J_1 \sinh 2\beta J_2\right]^{-2}\right)^{\frac{1}{8}}
where J_1 and J_2 are the horizontal and vertical interaction energies. A complete derivation was only given in 1951 by {{harvtxt|Yang|1952}}, using a limiting process of transfer matrix eigenvalues. The proof was subsequently greatly simplified in 1963 by Montroll, Potts, and Ward, using Szegő's limit formula for Toeplitz determinants, by treating the magnetization as the limit of correlation functions.
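Numerically, the formula behaves as expected: M vanishes when sinh 2βJ₁ sinh 2βJ₂ ≤ 1 (the disordered phase) and approaches 1 at low temperature. A small sketch (the function name is ours; we set the magnetization to zero below the transition, where the bracket is not positive):

```python
import math

def onsager_magnetization(beta, J1, J2):
    # M = (1 - [sinh(2 b J1) sinh(2 b J2)]^-2)^(1/8) above the transition
    s = math.sinh(2 * beta * J1) * math.sinh(2 * beta * J2)
    if s <= 1.0:
        return 0.0            # disordered phase: no spontaneous magnetization
    return (1.0 - s ** -2) ** 0.125

# isotropic critical point: sinh(2 beta_c J) = 1, i.e. beta_c J = ln(1 + sqrt(2)) / 2
beta_c = math.log(1.0 + math.sqrt(2.0)) / 2.0
```

For the isotropic lattice (J₁ = J₂ = 1) the magnetization switches on exactly at β_c = ln(1 + √2)/2 and saturates rapidly below the critical temperature.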

Three and four dimensions

In three dimensions, the Ising model was shown to have a representation in terms of non-interacting fermionic lattice strings by Alexander Polyakov. The critical point of the three-dimensional Ising model is described by a conformal field theory, as evidenced by Monte Carlo simulations (Billó, Caselle, Gaiotto, Gliozzi, Meineri et al., "Line defects in the 3d Ising model", JHEP 2013; Cosme, Lopes & Penedones, "Conformal symmetry of the critical 3D Ising model inside a sphere", JHEP 2015) and theoretical arguments (Delamotte, Tissier & Wschebor, "Scale invariance implies conformal invariance for the three-dimensional Ising model", Phys. Rev. E 93, 012144, 2016). This conformal field theory is under active investigation using the method of the conformal bootstrap (El-Showk, Paulos, Poland, Rychkov, Simmons-Duffin & Vichi, "Solving the 3D Ising Model with the Conformal Bootstrap", Phys. Rev. D 86, 025022, 2012; El-Showk et al., "Solving the 3d Ising Model with the Conformal Bootstrap II. c-Minimization and Precise Critical Exponents", J. Stat. Phys. 157, 869–914, 2014; Simmons-Duffin, "A semidefinite program solver for the conformal bootstrap", JHEP 2015; Kadanoff, "Deep Understanding Achieved on the 3d Ising Model", Journal Club for Condensed Matter Physics, April 30, 2014). This method currently yields the most precise information about the structure of the critical theory (see Ising critical exponents). In dimensions near four, the critical behavior of the model is understood to correspond to the renormalization behavior of the scalar phi-4 theory (see Kenneth Wilson).

More than four dimensions

In any dimension, the Ising model can be productively described by a locally varying mean field. The field is defined as the average spin value over a large region, but not so large as to include the entire system. The field still has slow variations from point to point, as the averaging volume moves. These fluctuations in the field are described by a continuum field theory in the infinite system limit.

Local field

The field H is defined as the long wavelength Fourier components of the spin variable, in the limit that the wavelengths are long. There are many ways to take the long wavelength average, depending on the details of how short wavelengths are cut off. The details are not too important, since the goal is to find the statistics of H and not the spins. Once the correlations in H are known, the long-distance correlations between the spins will be proportional to the long-distance correlations in H.

For any value of the slowly varying field H, the free energy (log-probability) is a local analytic function of H and its gradients. The free energy F(H) is defined to be the sum over all Ising configurations which are consistent with the long wavelength field. Since H is a coarse description, there are many Ising configurations consistent with each value of H, so long as not too much exactness is required for the match.

Since the allowed range of values of the spin in any region only depends on the values of H within one averaging volume of that region, the free energy contribution from each region only depends on the value of H there and in the neighboring regions. So F is a sum over all regions of a local contribution, which only depends on H and its derivatives.

By symmetry in H, only even powers contribute. By reflection symmetry on a square lattice, only even powers of gradients contribute. Writing out the first few terms in the free energy:
\beta F = \int d^dx \left[ A H^2 + \sum_{i=1}^d Z_i (\partial_i H)^2 + \lambda H^4 + \cdots \right].
On a square lattice, symmetries guarantee that the coefficients Z_i of the derivative terms are all equal. But even for an anisotropic Ising model, where the Z_i's in different directions are different, the fluctuations in H are isotropic in a coordinate system where the different directions of space are rescaled. On any lattice, the derivative term
Z_{ij} \, \partial_i H \, \partial_j H
is a positive definite quadratic form, and can be used to define the metric for space. So any translationally invariant Ising model is rotationally invariant at long distances, in coordinates that make Z_{ij} = δ_{ij}. Rotational symmetry emerges spontaneously at large distances just because there are not very many low order terms. At higher order multicritical points, this accidental symmetry is lost.

Since βF is a function of a slowly spatially varying field, the probability of any field configuration is:
P(H) \propto e^{ - \int d^dx \left[ A H^2 + Z |\nabla H|^2 + \lambda H^4 \right]}.
The statistical average of any product of H terms is equal to:
\langle H(x_1) H(x_2) \cdots H(x_n) \rangle = { \int DH \, P(H) \, H(x_1) H(x_2) \cdots H(x_n) \over \int DH \, P(H) }.
The denominator in this expression is called the partition function, and the integral over all possible values of H is a statistical path integral. It integrates exp(−βF) over all values of H, over all the long wavelength Fourier components of the spins. F is a Euclidean Lagrangian for the field H; the only differences between this and the quantum field theory of a scalar field are that all the derivative terms enter with a positive sign, and there is no overall factor of i:
Z = \int DH \, e^{ - \int d^dx \left[ A H^2 + Z |\nabla H|^2 + \lambda H^4 \right]}
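The coarse-grained field H described above can be pictured very concretely: partition the lattice into blocks and replace each block of spins by its average. A toy sketch (the function name is ours):

```python
def block_average(spins, b):
    # spins: L x L nested list of +1/-1; returns (L/b) x (L/b) field of block means
    L = len(spins)
    n = L // b
    return [[sum(spins[bi * b + i][bj * b + j]
                 for i in range(b) for j in range(b)) / (b * b)
             for bj in range(n)]
            for bi in range(n)]
```

Averaging an all-up configuration gives H = 1 everywhere, while a block with equal numbers of up and down spins averages to H = 0; the statistics of this block field are what the free energy F(H) describes.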

Dimensional analysis

The form of F can be used to predict which terms are most important by dimensional analysis. Dimensional analysis is not completely straightforward, because the scaling of H needs to be determined. In the generic case, choosing the scaling law for H is easy, since the only term that contributes is the first one,
F = \int d^dx \, A H^2.
This term is the most significant, but it gives trivial behavior. This form of the free energy is ultralocal, meaning that it is a sum of an independent contribution from each point. This is like the spin-flips in the one-dimensional Ising model. Every value of H at any point fluctuates completely independently of the value at any other point.

The scale of the field can be redefined to absorb the coefficient A, and then it is clear that A only determines the overall scale of fluctuations. The ultralocal model describes the long wavelength high temperature behavior of the Ising model, since in this limit the fluctuation averages are independent from point to point.

To find the critical point, lower the temperature. As the temperature goes down, the fluctuations in H go up because the fluctuations are more correlated. This means that the average of a large number of spins does not become small as quickly as if they were uncorrelated, because they tend to be the same. This corresponds to decreasing A in the system of units where H does not absorb A. The phase transition can only happen when the subleading terms in F can contribute, but since the first term dominates at long distances, the coefficient A must be tuned to zero. This is the location of the critical point:
F = \int d^dx \left[ t H^2 + \lambda H^4 + Z (\nabla H)^2 \right],
where t is a parameter which goes through zero at the transition. Since t is vanishing, fixing the scale of the field using this term makes the other terms blow up. Once t is small, the scale of the field can either be set to fix the coefficient of the H^4 term or the (\nabla H)^2 term to 1.

Magnetization

To find the magnetization, fix the scaling of H so that λ is one. Now the field H has dimension −d/4, so that H^4 d^dx is dimensionless, and Z has dimension 2 − d/2. In this scaling, the gradient term is only important at long distances for d ≤ 4. Above four dimensions, at long wavelengths, the overall magnetization is only affected by the ultralocal terms.

There is one subtle point. The field H is fluctuating statistically, and the fluctuations can shift the zero point of t. To see how, consider H^4 split in the following way:
H(x)^4 = -\langle H(x)^2\rangle^2 + 2\langle H(x)^2\rangle H(x)^2 + \left(H(x)^2 - \langle H(x)^2\rangle\right)^2
The first term is a constant contribution to the free energy, and can be ignored. The second term is a finite shift in t. The third term is a quantity that scales to zero at long distances. This means that when analyzing the scaling of t by dimensional analysis, it is the shifted t that is important. This was historically very confusing, because the shift in t at any finite λ is finite, but near the transition t is very small. The fractional change in t is very large, and in units where t is fixed the shift looks infinite.

The magnetization is at the minimum of the free energy, and this is an analytic equation. In terms of the shifted t,
{\partial \over \partial H} \left( t H^2 + \lambda H^4 \right) = 2t H + 4\lambda H^3 = 0
For t < 0, the minima are at H proportional to the square root of −t. So Landau's catastrophe argument is correct in dimensions 5 and higher. The magnetization exponent in these dimensions is equal to the mean field value.

When t is negative, the fluctuations about the new minimum are described by a new positive quadratic coefficient. Since this term always dominates, at temperatures below the transition the fluctuations again become ultralocal at long distances.
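The minimization can be checked directly: for t < 0 and λ > 0, setting the derivative 2tH + 4λH³ to zero gives the nonzero minimum H* = √(−t/2λ), and a brute numerical scan of the free energy lands on the same point. A sketch (the function names are illustrative):

```python
import math

def landau_minimum(t, lam):
    # nonzero root of 2 t H + 4 lam H^3 = 0 for t < 0, lam > 0
    return math.sqrt(-t / (2.0 * lam))

def numeric_minimum(t, lam, hmax=3.0, steps=100000):
    # brute-force scan of f(H) = t H^2 + lam H^4 over H in [0, hmax]
    best_H, best_f = 0.0, 0.0
    for k in range(steps + 1):
        H = hmax * k / steps
        f = t * H * H + lam * H ** 4
        if f < best_f:
            best_H, best_f = H, f
    return best_H
```

For t = −1 and λ = 1/2 both give H* = 1, the mean field magnetization scaling M ∝ √(−t), i.e. the exponent 1/2.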

Fluctuations

To find the behavior of fluctuations, rescale the field to fix the gradient term. Then the length scaling dimension of the field is 1 − d/2. Now the field has constant quadratic spatial fluctuations at all temperatures. The scale dimension of the H^2 term is 2, while the scale dimension of the H^4 term is 4 − d. For d < 4, the H^4 term has positive scale dimension. In dimensions higher than 4 it has negative scale dimension.

This is an essential difference. In dimensions higher than 4, fixing the scale of the gradient term means that the coefficient of the H^4 term is less and less important at longer and longer wavelengths. The dimension at which nonquadratic contributions begin to contribute is known as the critical dimension. In the Ising model, the critical dimension is 4.

In dimensions above 4, the critical fluctuations are described by a purely quadratic free energy at long wavelengths. This means that the correlation functions are all computable as Gaussian averages:
\langle S(x)S(y)\rangle \propto \langle H(x)H(y)\rangle = G(x-y) = \int {d^dk \over (2\pi)^d} { e^{ik(x-y)} \over k^2 + t }
valid when x − y is large. The function G(x − y) is the analytic continuation to imaginary time of the Feynman propagator, since the free energy is the analytic continuation of the quantum field action for a free scalar field. For dimensions 5 and higher, all the other correlation functions at long distances are then determined by Wick's theorem. All the odd moments are zero, by ± symmetry. The even moments are the sum over all partitions into pairs of the product of G(x − y) for each pair.
\langle S(x_1) S(x_2) \cdots S(x_{2n})\rangle = C^n \sum G(x_{i_1},x_{j_1}) G(x_{i_2},x_{j_2}) \ldots G(x_{i_n},x_{j_n})
where C is the proportionality constant. So knowing G is enough. It determines all the multipoint correlations of the field.
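Wick's theorem here is purely combinatorial: the 2n-point function is a sum over all (2n − 1)!! ways of pairing the points. A short sketch enumerating the pairings (the helper name is ours):

```python
def pairings(points):
    # all ways to partition an even-length list into unordered pairs
    if not points:
        return [[]]
    first, rest = points[0], points[1:]
    result = []
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in pairings(remaining):
            result.append([(first, partner)] + sub)
    return result
```

For four points there are 3 pairings, so the Gaussian four-point function is G₁₂G₃₄ + G₁₃G₂₄ + G₁₄G₂₃; for six points there are 15, and so on.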

The critical two-point function

To determine the form of G, consider that the fields in a path integral obey the classical equations of motion derived by varying the free energy:
\begin{align} \left(-\nabla_x^2 + t\right) \langle H(x)H(y) \rangle &= 0 \\ \rightarrow\ \nabla^2 G(x) - tG(x) &= 0 \end{align}
This is valid at noncoincident points only, since the correlations of H are singular when points collide. H obeys classical equations of motion for the same reason that quantum mechanical operators obey them: its fluctuations are defined by a path integral.

At the critical point t = 0, this is Laplace's equation, which can be solved by Gauss's method from electrostatics. Define an electric field analog by
E = \nabla G
Away from the origin:
\nabla \cdot E = 0
since G is spherically symmetric in d dimensions, and E is the radial gradient of G. Integrating over a large d − 1 dimensional sphere,
\int d^{d-1}S \, E_r = \mathrm{constant}
This gives:
E = {C \over r^{d-1}}
and G can be found by integrating with respect to r.
G(r) = {C \over r^{d-2}}
The constant C fixes the overall normalization of the field.
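One can verify numerically that G(r) = C/r^{d−2} is harmonic away from the origin, using the radial form of the Laplacian in d dimensions. A finite-difference sketch (the function name is ours):

```python
def radial_laplacian(g, r, d, h=1e-3):
    # (1/r^(d-1)) d/dr ( r^(d-1) dg/dr ) = g'' + (d-1)/r g', by central differences
    gp = (g(r + h) - g(r - h)) / (2.0 * h)
    gpp = (g(r + h) - 2.0 * g(r) + g(r - h)) / (h * h)
    return gpp + (d - 1) / r * gp
```

For g(r) = r^{2−d} the result vanishes to the accuracy of the finite difference in every dimension, while a power that is not the Coulomb solution, such as 1/r in d = 5, does not.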

G(r) away from the critical point

When t does not equal zero, so that H is fluctuating at a temperature slightly away from critical, the two point function decays at long distances. The equation it obeys is altered:
\nabla^2 G - t G = 0 \quad\to\quad {1 \over r^{d - 1}} {d \over dr} \left( r^{d-1} {dG \over dr} \right) - t G(r) = 0
For r small compared with 1/\sqrt{t}, the solution diverges in exactly the same way as in the critical case, but the long distance behavior is modified. To see how, it is convenient to represent the two point function as an integral, introduced by Schwinger in the quantum field theory context:
G(x) = \int d\tau {1 \over \left(\sqrt{4\pi\tau}\right)^d} e^{-{x^2 \over 4\tau} - t\tau}
This is G, since the Fourier transform of this integral is easy. Each fixed τ contribution is a Gaussian in x, whose Fourier transform is another Gaussian of reciprocal width in k.
G(k) = \int d\tau \, e^{-(k^2 + t)\tau} = {1 \over k^2 + t}
This is the inverse of the operator −∇² + t in k-space, acting on the unit function in k-space, which is the Fourier transform of a delta function source localized at the origin. So it satisfies the same equation as G with the same boundary conditions that determine the strength of the divergence at 0.

The interpretation of the integral representation over the proper time τ is that the two point function is the sum over all random walk paths that link position 0 to position x over time τ. The density of these paths at time τ at position x is Gaussian, but the random walkers disappear at a steady rate proportional to t, so that the Gaussian at time τ is diminished in height by a factor that decreases steadily exponentially. In the quantum field theory context, these are the paths of relativistically localized quanta in a formalism that follows the paths of individual particles. In the pure statistical context, these paths still appear by the mathematical correspondence with quantum fields, but their interpretation is less directly physical.

The integral representation immediately shows that G(r) is positive, since it is represented as a weighted sum of positive Gaussians. It also gives the rate of decay at large r, since the proper time for a random walk to reach position r is τ ∼ r², and in this time the Gaussian height has decayed by e^{-t\tau} = e^{-tr^2}. The decay factor appropriate for position r is therefore e^{-\sqrt{t}\, r}.

A heuristic approximation for G(r) is:
G(r) \approx { e^{-\sqrt{t}\, r} \over r^{d-2}}
This is not an exact form, except in three dimensions, where interactions between paths become important. The exact forms in high dimensions are variants of Bessel functions.
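In three dimensions the Schwinger representation can be checked numerically: taking the heat-kernel normalization (4πτ)^{−3/2}, which makes the Fourier transform exactly 1/(k² + t), the proper-time integral reproduces the Yukawa form e^{−√t r}/(4πr). A sketch using a simple trapezoid rule (the function names are ours):

```python
import math

def schwinger_G3(r, t, tau_max=60.0, steps=120000):
    # trapezoid rule for G(r) = int dtau (4 pi tau)^(-3/2) exp(-r^2/(4 tau) - t tau)
    h = tau_max / steps
    total = 0.0
    for k in range(1, steps + 1):
        tau = k * h
        f = (4.0 * math.pi * tau) ** -1.5 * math.exp(-r * r / (4.0 * tau) - t * tau)
        weight = 0.5 if k == steps else 1.0   # integrand vanishes at tau = 0
        total += weight * f * h
    return total

def yukawa_G3(r, t):
    # exact 3d result: exp(-sqrt(t) r) / (4 pi r)
    return math.exp(-math.sqrt(t) * r) / (4.0 * math.pi * r)
```

The agreement to a fraction of a percent for moderate r and t illustrates that the heuristic form above is exact (up to normalization) in three dimensions for the free propagator.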

Symanzik polymer interpretation

The interpretation of the correlations as fixed size quanta travelling along random walks gives a way of understanding why the critical dimension of the H^4 interaction is 4. The term H^4 can be thought of as the square of the density of the random walkers at any point. In order for such a term to alter the finite order correlation functions, which only introduce a few new random walks into the fluctuating environment, the new paths must intersect. Otherwise, the square of the density is just proportional to the density and only shifts the H^2 coefficient by a constant. But the intersection probability of random walks depends on the dimension, and random walks in dimension higher than 4 do not intersect.

The fractal dimension of an ordinary random walk is 2. The number of balls of size ε required to cover the path increases as ε^{−2}. Two objects of fractal dimension 2 will intersect with reasonable probability only in a space of dimension 4 or less, the same condition as for a generic pair of planes. Kurt Symanzik argued that this implies that the critical Ising fluctuations in dimensions higher than 4 should be described by a free field. This argument eventually became a mathematical proof.
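The fractal dimension 2 is just the diffusive scaling ⟨R²⟩ = N for a simple random walk: a path of N unit steps is covered by of order N balls of radius 1, i.e. ε^{−2} balls of radius ε. A quick Monte Carlo sketch in d = 4 (the function name is ours):

```python
import random

def mean_square_displacement(d, n_steps, n_walks, seed=0):
    # simple random walk on Z^d with unit steps along coordinate axes;
    # the exact mean of R^2 after n_steps steps is n_steps, in any dimension
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x = [0] * d
        for _ in range(n_steps):
            axis = rng.randrange(d)
            x[axis] += rng.choice((-1, 1))
        total += sum(c * c for c in x)
    return total / n_walks
```

With a fixed seed the sample mean for 100-step walks comes out close to 100, independent of the dimension, which is the statement that the walk's extent grows as √N.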

4 − ε dimensions – renormalization group

The Ising model in four dimensions is described by a fluctuating field, but now the fluctuations are interacting. In the polymer representation, intersections of random walks are marginally possible. In the quantum field continuation, the quanta interact.

The negative logarithm of the probability of any field configuration H is the free energy function
F = \int d^4x \left[ {Z \over 2} |\nabla H|^2 + {t \over 2} H^2 + {\lambda \over 4!} H^4 \right].
The numerical factors are there to simplify the equations of motion. The goal is to understand the statistical fluctuations. Like any other non-quadratic path integral, the correlation functions have a Feynman expansion as particles travelling along random walks, splitting and rejoining at vertices. The interaction strength is parametrized by the classically dimensionless quantity λ.

Although dimensional analysis shows that both λ and Z are dimensionless, this is misleading. The long wavelength statistical fluctuations are not exactly scale invariant, and only become scale invariant when the interaction strength vanishes.

The reason is that there is a cutoff used to define H, and the cutoff defines the shortest wavelength. Fluctuations of H at wavelengths near the cutoff can affect the longer-wavelength fluctuations. If the system is scaled along with the cutoff, the parameters will scale by dimensional analysis, but then comparing parameters doesn't compare behavior, because the rescaled system has more modes. If the system is rescaled in such a way that the short wavelength cutoff remains fixed, the long-wavelength fluctuations are modified.

Wilson renormalization

A quick heuristic way of studying the scaling is to cut off the H wavenumbers at a point Λ. Fourier modes of H with wavenumbers larger than Λ are not allowed to fluctuate. A rescaling of length that makes the whole system smaller increases all wavenumbers, and moves some fluctuations above the cutoff.

To restore the old cutoff, perform a partial integration over all the wavenumbers which used to be forbidden, but are now fluctuating. In Feynman diagrams, integrating over a fluctuating mode at wavenumber k links up lines carrying momentum k in a correlation function in pairs, with a factor of the propagator.

Under rescaling, when the system is shrunk by a factor of (1+b), the t coefficient scales up by a factor (1+b)² by dimensional analysis. The change in t for infinitesimal b is 2bt. The other two coefficients are dimensionless and do not change at all.

The lowest order effect of integrating out can be calculated from the equations of motion:
\nabla^2 H - t H = {\lambda \over 6} H^3.
This equation is an identity inside any correlation function, away from other insertions. After integrating out the modes with Λ < k < (1+b)Λ, it will be a slightly different identity. Since the form of the equation will be preserved, to find the change in coefficients it is sufficient to analyze the change in the H^3 term. In a Feynman diagram expansion, the H^3 term in a correlation function has three dangling lines. Joining two of them at large wavenumber k gives a change in H^3 with one dangling line, so proportional to H:
\delta H^3 = 3H \int_{\Lambda < k < (1+b)\Lambda} {d^4k \over (2\pi)^4} \, {1 \over k^2 + t}
