GetWiki
Renormalization
please note: the content below is remote from Wikipedia; it has been imported raw for GetWiki
Renormalization is a collection of techniques in quantum field theory, the statistical mechanics of fields, and the theory of self-similar geometric structures, that are used to treat infinities arising in calculated quantities by altering values of these quantities to compensate for effects of their self-interactions. But even if no infinities arose in loop diagrams in quantum field theory, it could be shown that renormalization of the mass and fields appearing in the original Lagrangian is necessary (see, e.g., Weinberg vol. I, chapter 10).

For example, an electron theory may begin by postulating an electron with an initial mass and charge. In quantum field theory a cloud of virtual particles, such as photons, positrons, and others, surrounds and interacts with the initial electron. Accounting for the interactions of the surrounding particles (e.g. collisions at different energies) shows that the electron system behaves as if it had a different mass and charge than initially postulated. Renormalization, in this example, mathematically replaces the initially postulated mass and charge of an electron with the experimentally observed mass and charge. Mathematics and experiments prove that positrons and more massive particles like protons exhibit precisely the same observed charge as the electron, even in the presence of much stronger interactions and more intense clouds of virtual particles.

Renormalization specifies relationships between parameters in the theory when parameters describing large distance scales differ from parameters describing small distance scales.
In high-energy particle accelerators like the CERN Large Hadron Collider, the concept named pileup occurs when undesirable proton–proton collisions interfere with data collection for simultaneous, nearby desirable measurements. Physically, the pileup of contributions from an infinity of scales involved in a problem may result in further infinities. When describing spacetime as a continuum, certain statistical and quantum mechanical constructions are not well defined. To define them, or make them unambiguous, a continuum limit must carefully remove the "construction scaffolding" of lattices at various scales. Renormalization procedures are based on the requirement that certain physical quantities (such as the mass and charge of an electron) equal observed (experimental) values. That is, the experimental value of the physical quantity yields practical applications, but due to their empirical nature the observed measurements represent areas of quantum field theory that require deeper derivation from theoretical bases.

Renormalization was first developed in quantum electrodynamics (QED) to make sense of infinite integrals in perturbation theory. Initially viewed as a suspect provisional procedure even by some of its originators, renormalization eventually was embraced as an important and self-consistent actual mechanism of scale physics in several fields of physics and mathematics. Today, the point of view has shifted: on the basis of the breakthrough renormalization group insights of Nikolay Bogolyubov and Kenneth Wilson, the focus is on variation of physical quantities across contiguous scales, while distant scales are related to each other through "effective" descriptions. All scales are linked in a broadly systematic way, and the actual physics pertinent to each is extracted with the specific computational techniques appropriate for it.
Wilson clarified which variables of a system are crucial and which are redundant.

Renormalization is distinct from regularization, another technique to control infinities by assuming the existence of new unknown physics at new scales.
Self-interactions in classical physics
[Figure 1. Renormalization in quantum electrodynamics: the simple electron/photon interaction that determines the electron's charge at one renormalization point is revealed to consist of more complicated interactions at another.]
The problem of infinities first arose in the classical electrodynamics of point particles in the 19th and early 20th century. The mass of a charged particle should include the mass–energy in its electrostatic field (electromagnetic mass). Assume that the particle is a charged spherical shell of radius r_e. The mass–energy in the field is
m_\text{em} = \int \frac{1}{2} E^2 \, dV = \int_{r_e}^\infty \frac{1}{2} \left( \frac{q}{4\pi r^2} \right)^2 4\pi r^2 \, dr = \frac{q^2}{8\pi r_e},
which becomes infinite as r_e → 0. This implies that the point particle would have infinite inertia, making it unable to be accelerated. Incidentally, the value of r_e that makes m_\text{em} equal to the electron mass is called the classical electron radius, which (setting q = e and restoring factors of c and \varepsilon_0) turns out to be
r_e = \frac{e^2}{4\pi\varepsilon_0 m_e c^2} = \alpha \frac{\hbar}{m_e c} \approx 2.8 \times 10^{-15}~\text{m},
where α ≈ 1/137 is the fine-structure constant, and ħ/(m_e c) is the (reduced) Compton wavelength of the electron.

Renormalization: The total effective mass of a spherical charged particle includes the actual bare mass of the spherical shell (in addition to the mass mentioned above associated with its electric field). If the shell's bare mass is allowed to be negative, it might be possible to take a consistent point limit. This was called renormalization, and Lorentz and Abraham attempted to develop a classical theory of the electron this way. This early work was the inspiration for later attempts at regularization and renormalization in quantum field theory. (See also regularization (physics) for an alternative way to remove infinities from this classical problem, assuming new physics exists at small scales.)

When calculating the electromagnetic interactions of charged particles, it is tempting to ignore the back-reaction of a particle's own field on itself (analogous to the back-EMF of circuit analysis). But this back-reaction is necessary to explain the friction on charged particles when they emit radiation. If the electron is assumed to be a point, the value of the back-reaction diverges, for the same reason that the mass diverges, because the field is inverse-square.

The Abraham–Lorentz theory had a noncausal "pre-acceleration": sometimes an electron would start moving before the force is applied. This is a sign that the point limit is inconsistent.

The trouble was worse in classical field theory than in quantum field theory, because in quantum field theory a charged particle experiences Zitterbewegung due to interference with virtual particle–antiparticle pairs, effectively smearing out the charge over a region comparable to the Compton wavelength.

Divergences in quantum electrodynamics
[Figure: (a) Vacuum polarization, a.k.a. charge screening; this loop has a logarithmic ultraviolet divergence. (b) Self-energy diagram in QED. (c) "Penguin" diagram.]
When developing quantum electrodynamics in the 1930s, Max Born, Werner Heisenberg, Pascual Jordan, and Paul Dirac discovered that in perturbative corrections many integrals were divergent (see The problem of infinities).

One way of describing the divergences of the perturbation-theory corrections was discovered in 1947–49 by Hans Kramers, Hans Bethe (Phys. Rev. 72, 339 (1947)), Julian Schwinger (Phys. Rev. 73, 416 (1948); 74, 1439 (1948); 75, 651 (1949); 76, 790 (1949)), Richard Feynman (Rev. Mod. Phys. 20, 367 (1948); Phys. Rev. 74, 939 (1948); 74, 1430 (1948)), and Shin'ichiro Tomonaga (with collaborators, in a series of papers in Prog. Theor. Phys. 1–3, 1946–48, and, with J. R. Oppenheimer, Phys. Rev. 74, 224 (1948)), and systematized by Freeman Dyson in 1949 (Phys. Rev. 75, 486 (1949)). Kramers presented his work at the 1947 Shelter Island Conference, repeated in 1948 at the Solvay Conference; it did not appear in print until the Proceedings of the Solvay Conference were published in 1950, and his approach was nonrelativistic.

The divergences appear in radiative corrections involving Feynman diagrams with closed loops of virtual particles in them. While virtual particles obey conservation of energy and momentum, they can have any energy and momentum, even one that is not allowed by the relativistic energy–momentum relation for the observed mass of that particle (that is, E^2 - p^2 is not necessarily the squared mass of the particle in that process; e.g., for a photon it could be nonzero). Such a particle is called off-shell. When there is a loop, the momentum of the particles involved in the loop is not uniquely determined by the energies and momenta of incoming and outgoing particles. A variation in the energy of one particle in the loop can be balanced by an equal and opposite change in the energy of another particle in the loop, without affecting the incoming and outgoing particles. Thus many variations are possible, and to find the amplitude for the loop process, one must integrate over all possible combinations of energy and momentum that could travel around the loop.

These integrals are often divergent, that is, they give infinite answers. The divergences that are significant are the "ultraviolet" (UV) ones. An ultraviolet divergence can be described as one that comes from
- the region in the integral where all particles in the loop have large energies and momenta,
- very short wavelengths and high-frequency fluctuations of the fields, in the path integral for the field,
- very short proper time between particle emission and absorption, if the loop is thought of as a sum over particle paths.

So these divergences are short-distance, short-time phenomena. There are exactly three one-loop divergent loop diagrams in quantum electrodynamics:
(a) A photon creates a virtual electron–positron pair, which then annihilates. This is a vacuum polarization diagram.
(b) An electron quickly emits and reabsorbs a virtual photon, called a self-energy.
(c) An electron emits a photon, emits a second photon, and reabsorbs the first. This process is shown in the section below in Figure 2, and it is called a vertex renormalization. The Feynman diagram for this is also called a "penguin diagram" due to its shape remotely resembling a penguin (with the initial and final state electrons as the arms and legs, the second photon as the body and the first looping photon as the head).
The three divergences correspond to the three parameters in the theory under consideration:
- the field normalization Z,
- the mass of the electron,
- the charge of the electron.
For example, the function

\left( p^2 - a^2 \right)^{\frac{1}{2}}

is well defined at p = a but is UV divergent; if we take the 3/2-th fractional derivative with respect to -a^2, we obtain the IR divergence

\frac{1}{p^2 - a^2},

so we can cure IR divergences by turning them into UV divergences.

A loop divergence
[Figure 2. A diagram contributing to electron–electron scattering in QED. The loop has an ultraviolet divergence.]
The diagram in Figure 2 shows one of the several one-loop contributions to electron–electron scattering in QED. The electron on the left side of the diagram, represented by the solid line, starts out with four-momentum p^μ and ends up with four-momentum r^μ. It emits a virtual photon carrying r^μ − p^μ to transfer energy and momentum to the other electron. But in this diagram, before that happens, it emits another virtual photon carrying four-momentum q^μ, and it reabsorbs this one after emitting the other virtual photon. Energy and momentum conservation do not determine the four-momentum q^μ uniquely, so all possibilities contribute equally and we must integrate.

This diagram's amplitude ends up with, among other things, a factor from the loop of
-ie^3 \int \frac{d^4 q}{(2\pi)^4} \gamma^\mu \frac{i (\gamma^\alpha (r - q)_\alpha + m)}{(r - q)^2 - m^2 + i \epsilon} \gamma^\rho \frac{i (\gamma^\beta (p - q)_\beta + m)}{(p - q)^2 - m^2 + i \epsilon} \gamma^\nu \frac{-i g_{\mu\nu}}{q^2 + i\epsilon}.
The various γ^μ factors in this expression are gamma matrices as in the covariant formulation of the Dirac equation; they have to do with the spin of the electron. The factors of e are the electric coupling constant, while the iε terms provide a heuristic definition of the contour of integration around the poles in the space of momenta. The important part for our purposes is the dependency on q^μ of the three big factors in the integrand, which are from the propagators of the two electron lines and the photon line in the loop.

This has a piece with two powers of q^μ on top that dominates at large values of q^μ (Pokorski 1987, p. 122):
e^3 \gamma^\mu \gamma^\alpha \gamma^\rho \gamma^\beta \gamma_\mu \int \frac{d^4 q}{(2\pi)^4} \frac{q_\alpha q_\beta}{(r - q)^2 (p - q)^2 q^2}.
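By power counting, at large q the measure d⁴q contributes q³ dq, the numerator contributes q², and the three propagators contribute 1/q⁶, so the integrand behaves like dq/q and the integral grows like the logarithm of the cutoff. A quick numerical check on a simplified Euclidean caricature of the integrand (illustrative only: the unit mass, the lower endpoint, and the overall normalization are placeholder choices, not the actual QED expression):

```python
import math

def integrand(q, m=1.0):
    # Euclidean caricature: q^3 from 4D phase space, times q^2 / (q^2 + m^2)^3
    # from the numerator and three propagators; behaves like 1/q at large q.
    return q**5 / (q**2 + m**2)**3

def integral_to_cutoff(cutoff, m=1.0, steps=20000):
    # Trapezoid rule in t = ln(q), so the grid covers many decades evenly.
    t0, t1 = math.log(1e-3), math.log(cutoff)
    h = (t1 - t0) / steps
    total = 0.0
    for i in range(steps + 1):
        q = math.exp(t0 + i * h)
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * integrand(q, m) * q  # dq = q dt
    return total * h

values = [integral_to_cutoff(10.0**k) for k in (2, 3, 4, 5)]
increments = [b - a for a, b in zip(values, values[1:])]
# Each extra decade of cutoff adds about ln(10) ≈ 2.3026 to the integral:
print(increments)
```

The integral never settles down: doubling the cutoff always adds a fixed amount, which is the signature of a logarithmic ultraviolet divergence.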
This integral is divergent and infinite, unless we cut it off at finite energy and momentum in some way. Similar loop divergences occur in other quantum field theories.

Renormalized and bare quantities
The solution was to realize that the quantities initially appearing in the theory's formulae (such as the formula for the Lagrangian), representing such things as the electron's electric charge and mass, as well as the normalizations of the quantum fields themselves, did not actually correspond to the physical constants measured in the laboratory. As written, they were bare quantities that did not take into account the contribution of virtual-particle loop effects to the physical constants themselves. Among other things, these effects would include the quantum counterpart of the electromagnetic back-reaction that so vexed classical theorists of electromagnetism. In general, these effects would be just as divergent as the amplitudes under consideration in the first place; so finite measured quantities would, in general, imply divergent bare quantities.

To make contact with reality, then, the formulae would have to be rewritten in terms of measurable, renormalized quantities. The charge of the electron, say, would be defined in terms of a quantity measured at a specific kinematic renormalization point or subtraction point (which will generally have a characteristic energy, called the renormalization scale or simply the energy scale). The parts of the Lagrangian left over, involving the remaining portions of the bare quantities, could then be reinterpreted as counterterms, involved in divergent diagrams exactly canceling out the troublesome divergences for other diagrams.

Renormalization in QED
[Figure 3. The vertex corresponding to the Z_1 counterterm cancels the divergence in Figure 2.]
For example, in the Lagrangian of QED
\mathcal{L} = \bar\psi_B \left[ i\gamma_\mu \left( \partial^\mu + ie_B A_B^\mu \right) - m_B \right] \psi_B - \frac{1}{4} F_{B\mu\nu} F_B^{\mu\nu}
the fields and coupling constant are really bare quantities, hence the subscript B above. Conventionally the bare quantities are written so that the corresponding Lagrangian terms are multiples of the renormalized ones:
\left( \bar\psi m \psi \right)_B = Z_0 \, \bar\psi m \psi
\left( \bar\psi \left( \partial^\mu + ieA^\mu \right) \psi \right)_B = Z_1 \, \bar\psi \left( \partial^\mu + ieA^\mu \right) \psi
\left( F_{\mu\nu} F^{\mu\nu} \right)_B = Z_3 \, F_{\mu\nu} F^{\mu\nu}.
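These relations fix how the bare parameters depend on the Z factors. A small numerical sketch (assuming the standard textbook conventions ψ_B = √Z₂ ψ and A_B = √Z₃ A, which are conventions of this illustration rather than stated above) shows that once the Ward–Takahashi identity discussed below sets Z₁ = Z₂, the bare charge depends only on the photon-field renormalization Z₃:

```python
import math

def bare_charge(e, Z1, Z2, Z3):
    # With psi_B = sqrt(Z2) psi and A_B = sqrt(Z3) A, matching the bare
    # interaction term e_B psibar_B A_B psi_B to Z1 times the renormalized
    # one gives the standard relation e_B = Z1 / (Z2 sqrt(Z3)) * e.
    return Z1 / (Z2 * math.sqrt(Z3)) * e

# Ward-Takahashi identity: Z1 = Z2. Then the electron-field factor Z2
# cancels, and e_B = e / sqrt(Z3) whatever Z2 is. Numbers are illustrative.
e, Z3 = 0.30282, 0.9
print(bare_charge(e, 1.7, 1.7, Z3), e / math.sqrt(Z3))  # equal
print(bare_charge(e, 2.5, 2.5, Z3))                     # same again
```

This is why, in QED, charge renormalization can be attributed entirely to vacuum polarization.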
Gauge invariance, via a Ward–Takahashi identity, turns out to imply that we can renormalize the two terms of the covariant derivative piece

\bar\psi (\partial + ieA) \psi

together (Pokorski 1987, p. 115), which is what happened to Z_2; it is the same as Z_1.

A term in this Lagrangian, for example, the electron–photon interaction pictured in Figure 1, can then be written
\mathcal{L}_I = -e \bar\psi \gamma_\mu A^\mu \psi - (Z_1 - 1) e \bar\psi \gamma_\mu A^\mu \psi
The physical constant e, the electron's charge, can then be defined in terms of some specific experiment: we set the renormalization scale equal to the energy characteristic of this experiment, and the first term gives the interaction we see in the laboratory (up to small, finite corrections from loop diagrams, providing such exotica as the high-order corrections to the magnetic moment). The rest is the counterterm. If the theory is renormalizable (see below for more on this), as it is in QED, the divergent parts of loop diagrams can all be decomposed into pieces with three or fewer legs, with an algebraic form that can be canceled out by the second term (or by the similar counterterms that come from Z_0 and Z_3). The diagram with the Z_1 counterterm's interaction vertex placed as in Figure 3 cancels out the divergence from the loop in Figure 2.

Historically, the splitting of the "bare terms" into the original terms and counterterms came before the renormalization group insight due to Kenneth Wilson (K. G. Wilson, "The renormalization group: critical phenomena and the Kondo problem," Rev. Mod. Phys. 47, 773 (1975)). According to such renormalization group insights, detailed in the next section, this splitting is unnatural and actually unphysical, as all scales of the problem enter in continuous systematic ways.

Running couplings
To minimize the contribution of loop diagrams to a given calculation (and therefore make it easier to extract results), one chooses a renormalization point close to the energies and momenta exchanged in the interaction. However, the renormalization point is not itself a physical quantity: the physical predictions of the theory, calculated to all orders, should in principle be independent of the choice of renormalization point, as long as it is within the domain of application of the theory. Changes in renormalization scale will simply affect how much of a result comes from Feynman diagrams without loops, and how much comes from the remaining finite parts of loop diagrams. One can exploit this fact to calculate the effective variation of physical constants with changes in scale. This variation is encoded by beta functions, and the general theory of this kind of scale dependence is known as the renormalization group.

Colloquially, particle physicists often speak of certain physical "constants" as varying with the energy of interaction, though in fact it is the renormalization scale that is the independent quantity. This running does, however, provide a convenient means of describing changes in the behavior of a field theory under changes in the energies involved in an interaction. For example, since the coupling in quantum chromodynamics becomes small at large energy scales, the theory behaves more like a free theory as the energy exchanged in an interaction becomes large – a phenomenon known as asymptotic freedom. Choosing an increasing energy scale and using the renormalization group makes this clear from simple Feynman diagrams; were this not done, the prediction would be the same, but would arise from complicated high-order cancellations.

For example,
I = \int_0^a \frac{1}{z}\,dz - \int_0^b \frac{1}{z}\,dz = \ln a - \ln b - \ln 0 + \ln 0
is ill-defined. To eliminate the divergence, simply change the lower limits of the integrals to ε_a and ε_b:
I = \ln a - \ln b - \ln \varepsilon_a + \ln \varepsilon_b = \ln \tfrac{a}{b} - \ln \tfrac{\varepsilon_a}{\varepsilon_b}
Making sure ε_b/ε_a → 1, then I = ln(a/b).

Regularization
Since the quantity ∞ − ∞ is ill-defined, in order to make this notion of canceling divergences precise, the divergences first have to be tamed mathematically using the theory of limits, in a process known as regularization (Weinberg, 1995).

An essentially arbitrary modification to the loop integrands, or regulator, can make them drop off faster at high energies and momenta, in such a manner that the integrals converge. A regulator has a characteristic energy scale known as the cutoff; taking this cutoff to infinity (or, equivalently, the corresponding length/time scale to zero) recovers the original integrals.

With the regulator in place, and a finite value for the cutoff, divergent terms in the integrals then turn into finite but cutoff-dependent terms. After canceling out these terms with the contributions from cutoff-dependent counterterms, the cutoff is taken to infinity and finite physical results are recovered. If physics on scales we can measure is independent of what happens at the very shortest distance and time scales, then it should be possible to get cutoff-independent results for calculations.

Many different types of regulator are used in quantum field theory calculations, each with its advantages and disadvantages. One of the most popular in modern use is dimensional regularization, invented by Gerardus 't Hooft and Martinus J. G. Veltman (Nucl. Phys. B 44, 189 (1972)), which tames the integrals by carrying them into a space with a fictitious fractional number of dimensions.
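The logic of trading a cutoff-dependent bare coupling for a measured value can be seen in a deliberately simple toy model (not QED; the one-loop-style logarithm and all numbers here are invented for illustration). Posit an "observable" A(E) = g_B + g_B² ln(Λ/E), fix the bare coupling g_B by demanding that A reproduce a measured value at a reference scale μ, and check that predictions at other scales are then nearly independent of the cutoff Λ:

```python
import math

def observable(g_bare, E, cutoff):
    # Toy "one-loop" amplitude: tree term plus a log-divergent correction.
    return g_bare + g_bare**2 * math.log(cutoff / E)

def fit_bare_coupling(g_measured, mu, cutoff):
    # Renormalization condition: observable(mu) must equal the measured
    # coupling. Solve g_B + g_B^2 * ln(cutoff/mu) = g_measured exactly.
    L = math.log(cutoff / mu)
    return (-1.0 + math.sqrt(1.0 + 4.0 * L * g_measured)) / (2.0 * L)

g_meas, mu, E = 0.1, 1.0, 10.0
predictions = []
for cutoff in (1e2, 1e3):
    g_B = fit_bare_coupling(g_meas, mu, cutoff)
    predictions.append(observable(g_B, E, cutoff))

# The bare coupling differs noticeably between the two cutoffs, but the
# renormalized prediction at E agrees up to higher-order (g^3) terms:
print(predictions)
```

The residual cutoff dependence is of order g³ and shrinks as the coupling gets weaker, which is the sense in which renormalized perturbation theory yields cutoff-independent results order by order.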
Another is Pauli–Villars regularization, which adds fictitious particles to the theory with very large masses, such that loop integrands involving the massive particles cancel out the existing loops at large momenta.

Yet another regularization scheme is the lattice regularization, introduced by Kenneth Wilson, which pretends that a hypercubical lattice constructs our spacetime with fixed grid size. This size is a natural cutoff for the maximal momentum that a particle could possess when propagating on the lattice. After doing a calculation on several lattices with different grid sizes, the physical result is extrapolated to grid size 0, or our natural universe. This presupposes the existence of a scaling limit.

A rigorous mathematical approach to renormalization theory is the so-called causal perturbation theory, where ultraviolet divergences are avoided from the start in calculations by performing well-defined mathematical operations only within the framework of distribution theory. In this approach, divergences are replaced by ambiguity: corresponding to a divergent diagram is a term which now has a finite, but undetermined, coefficient. Other principles, such as gauge symmetry, must then be used to reduce or eliminate the ambiguity.

Zeta function regularization
Julian Schwinger discovered a relationship between zeta function regularization and renormalization, using the asymptotic relation:
I(n, \Lambda) = \int_0^{\Lambda} dp \, p^n \sim 1 + 2^n + 3^n + \cdots + \Lambda^n \to \zeta(-n)
as the regulator Λ → ∞. Based on this, he considered using the values of ζ(−n) to get finite results. Although he reached inconsistent results, an improved formula studied by Hartle and J. Garcia, based on the works of E. Elizalde, includes the technique of the zeta regularization algorithm
I(n, \Lambda) = \frac{n}{2} I(n-1, \Lambda) + \zeta(-n) - \sum_{r=1}^{\infty} \frac{B_{2r}}{(2r)!} a_{n,r} (n - 2r + 1) I(n - 2r, \Lambda),
where the B_{2r} are the Bernoulli numbers and
a_{n,r} = \frac{\Gamma(n+1)}{\Gamma(n - 2r + 2)}.
So every I(m, Λ) can be written as a linear combination of ζ(−1), ζ(−3), ζ(−5), ..., ζ(−m). Or, simply using the Abel–Plana formula, we have for every divergent integral:
\zeta(-m, \beta) - \frac{\beta^{m}}{2} - i\int_0^{\infty} dt \, \frac{(it+\beta)^{m} - (-it+\beta)^{m}}{e^{2\pi t} - 1} = \int_0^\infty dp \, (p+\beta)^m,
valid when m > 0. Here the zeta function is the Hurwitz zeta function and β is a positive real number. The "geometric" analogy is given by using the rectangle method to evaluate the integral:
\int_0^\infty dx \, (\beta + x)^m \approx \sum_{n=0}^\infty h^{m+1} \, \zeta\left( \beta h^{-1}, -m \right),
using the Hurwitz zeta regularization plus the rectangle method with step h (not to be confused with Planck's constant).

The logarithmically divergent integral has the regularization
\sum_{n=0}^{\infty} \frac{1}{n+a} = -\psi(a) + \log(a),
since for the harmonic series \sum_{n=0}^{\infty} \frac{1}{an+1}, in the limit a → 0 we must recover the series \sum_{n=0}^{\infty} 1 = \zeta(0) = -\tfrac{1}{2}.

For multiloop integrals that depend on several variables k_1, \ldots, k_n, we can make a change of variables to polar coordinates and then replace the integral over the angles \int d\Omega by a sum, so that only a divergent integral remains, depending on the modulus r^2 = k_1^2 + \cdots + k_n^2, to which the zeta regularization algorithm can be applied. The main idea for multiloop integrals is to replace the factor F(q_1, \ldots, q_n), after a change to hyperspherical coordinates F(r, \Omega), so that the UV overlapping divergences are encoded in the variable r. In order to regularize these integrals one needs a regulator; for the case of multiloop integrals, this regulator can be taken as
\left( 1 + \sqrt{q_i q^i} \right)^{-s},
so the multiloop integral will converge for big enough s. Using zeta regularization we can analytically continue the variable s to the physical limit where s = 0, and then regularize any UV integral by replacing a divergent integral by a linear combination of divergent series, which can be regularized in terms of the negative values of the Riemann zeta function ζ(−m).

Attitudes and interpretation
The early formulators of QED and other quantum field theories were, as a rule, dissatisfied with this state of affairs. It seemed illegitimate to do something tantamount to subtracting infinities from infinities to get finite answers. Freeman Dyson argued that these infinities are of a basic nature and cannot be eliminated by any formal mathematical procedures, such as the renormalization method (F. J. Dyson, Phys. Rev. 85, 631 (1952); A. W. Stern, Science 116, 493 (1952)).

Dirac's criticism was the most persistent (P. A. M. Dirac, "The Evolution of the Physicist's Picture of Nature," Scientific American, May 1963, p. 53). As late as 1975, he was saying (Kragh, Helge, Dirac: A Scientific Biography, CUP 1990, p. 184):
Most physicists are very satisfied with the situation. They say: 'Quantum electrodynamics is a good theory and we do not have to worry about it any more.' I must say that I am very dissatisfied with the situation because this so-called 'good theory' does involve neglecting infinities which appear in its equations, ignoring them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves disregarding a quantity when it is small – not neglecting it just because it is infinitely great and you do not want it!
Another important critic was Feynman. Despite his crucial role in the development of quantum electrodynamics, he wrote the following in 1985 (Feynman, Richard P., QED: The Strange Theory of Light and Matter, Penguin 1990, p. 128):
The shell game that we play is technically called 'renormalization'. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It's surprising that the theory still hasn't been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.
While Dirac's criticism was based on the procedure of renormalization itself, Feynman's criticism was very different. Feynman was concerned that all field theories known in the 1960s had the property that the interactions become infinitely strong at short enough distance scales. This property, called a Landau pole, made it plausible that quantum field theories were all inconsistent. In 1974, Gross, Politzer and Wilczek showed that another quantum field theory, quantum chromodynamics, does not have a Landau pole. Feynman, along with most others, accepted that QCD was a fully consistent theory.

The general unease was almost universal in texts up to the 1970s and 1980s. Beginning in the 1970s, however, inspired by work on the renormalization group and effective field theory, and despite the fact that Dirac and various others – all of whom belonged to the older generation – never withdrew their criticisms, attitudes began to change, especially among younger theorists. Kenneth G. Wilson and others demonstrated that the renormalization group is useful in statistical field theory applied to condensed matter physics, where it provides important insights into the behavior of phase transitions. In condensed matter physics, a physical short-distance regulator exists: matter ceases to be continuous on the scale of atoms. Short-distance divergences in condensed matter physics do not present a philosophical problem, since the field theory is only an effective, smoothed-out representation of the behavior of matter anyway; there are no infinities since the cutoff is always finite, and it makes perfect sense that the bare quantities are cutoff-dependent.

If QFT holds all the way down past the Planck length (where it might yield to string theory, causal set theory or something different), then there may be no real problem with short-distance divergences in particle physics either; all field theories could simply be effective field theories.
In a sense, this approach echoes the older attitude that the divergences in QFT speak of human ignorance about the workings of nature, but also acknowledges that this ignorance can be quantified and that the resulting effective theories remain useful.

Be that as it may, Salam's remark in 1972 (C. J. Isham, A. Salam, and J. Strathdee, "Infinity Suppression Gravity Modified Quantum Electrodynamics II", Phys. Rev. D5, 2548 (1972)) still seems relevant:
Field-theoretic infinities – first encountered in Lorentz's computation of electron self-mass – have persisted in classical electrodynamics for seventy and in quantum electrodynamics for some thirty-five years. These long years of frustration have left in the subject a curious affection for the infinities and a passionate belief that they are an inevitable part of nature; so much so that even the suggestion of a hope that they may, after all, be circumvented – and finite values for the renormalization constants computed – is considered irrational.

Compare Russell's postscript to the third volume of his autobiography, The Final Years, 1944–1969 (George Allen and Unwin, Ltd., London 1969); Russell, Bertrand, The Autobiography of Bertrand Russell: The Final Years, 1944–1969 (Bantam Books, 1970), p. 221:
In the modern world, if communities are unhappy, it is often because they have ignorances, habits, beliefs, and passions, which are dearer to them than happiness or even life. I find many men in our dangerous age who seem to be in love with misery and death, and who grow angry when hopes are suggested to them. They think hope is irrational and that, in sitting down to lazy despair, they are merely facing facts.
Renormalizability
From this philosophical reassessment, a new concept follows naturally: the notion of renormalizability. Not all theories lend themselves to renormalization in the manner described above, with a finite supply of counterterms and all quantities becoming cutoff-independent at the end of the calculation. If the Lagrangian contains combinations of field operators of high enough dimension in energy units, the counterterms required to cancel all divergences proliferate to infinite number, and, at first glance, the theory would seem to gain an infinite number of free parameters and therefore lose all predictive power, becoming scientifically worthless. Such theories are called nonrenormalizable.

The Standard Model of particle physics contains only renormalizable operators, but the interactions of general relativity become nonrenormalizable operators if one attempts to construct a field theory of quantum gravity in the most straightforward manner (treating the metric in the Einstein–Hilbert Lagrangian as a perturbation about the Minkowski metric), suggesting that perturbation theory is useless in application to quantum gravity.

However, in an effective field theory, "renormalizability" is, strictly speaking, a misnomer. In a nonrenormalizable effective field theory, terms in the Lagrangian do multiply to infinity, but have coefficients suppressed by ever-more-extreme inverse powers of the energy cutoff. If the cutoff is a real, physical quantity, that is, if the theory is only an effective description of physics up to some maximum energy or minimum distance scale, then these additional terms could represent real physical interactions. Assuming that the dimensionless constants in the theory do not get too large, one can group calculations by inverse powers of the cutoff, and extract approximate predictions to finite order in the cutoff that still have a finite number of free parameters.
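The classification by operator dimension can be made concrete with a short power-counting argument; the following is a minimal sketch in natural units, with a schematic Fermi four-fermion term used as the illustration rather than the full V−A interaction:

```latex
% Natural units: the action S = \int d^4x\,\mathcal{L} is dimensionless,
% so [\mathcal{L}] = 4 in mass units. An operator \mathcal{O} of mass
% dimension \Delta therefore enters with a coupling g of dimension
%   [g] = 4 - \Delta,
% and \Delta \le 4 gives a renormalizable term, while \Delta > 4 gives
% a nonrenormalizable one.
%
% Example: a Dirac field has [\psi] = 3/2, so a four-fermion operator
% has dimension 6 and its coupling dimension -2:
\mathcal{L}_{\mathrm{int}} = G_F\,(\bar\psi\psi)(\bar\psi\psi),
\qquad [G_F] = 4 - 6 = -2,
\qquad G_F \sim \frac{1}{M_W^2}.
```

The negative mass dimension of G_F is exactly the "suppression by inverse powers of the cutoff" described above: here the cutoff is set by the W mass.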
It can even be useful to renormalize these "nonrenormalizable" interactions. Nonrenormalizable interactions in effective field theories rapidly become weaker as the energy scale becomes much smaller than the cutoff. The classic example is the Fermi theory of the weak nuclear force, a nonrenormalizable effective theory whose cutoff is comparable to the mass of the W particle. This fact may also provide a possible explanation for why almost all of the particle interactions we see are describable by renormalizable theories. It may be that any others that may exist at the GUT or Planck scale simply become too weak to detect in the realm we can observe, with one exception: gravity, whose exceedingly weak interaction is magnified by the presence of the enormous masses of stars and planets.{{Citation needed|date=February 2010}}

Renormalization schemes
In actual calculations, the counterterms introduced to cancel the divergences in Feynman diagram calculations beyond tree level must be fixed using a set of renormalization conditions. The common renormalization schemes in use include:
 minimal subtraction (MS) scheme and the related modified minimal subtraction (MS-bar) scheme
 on-shell scheme
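As a rough illustration of how these schemes differ, consider a schematic one-loop amplitude in dimensional regularization with d = 4 − 2ε; the finite function f and the overall prefactor are placeholders, not a specific diagram:

```latex
% A typical one-loop expression in dimensional regularization:
A(p^2) = \frac{g^2}{16\pi^2}\left[\frac{1}{\epsilon} - \gamma_E + \ln 4\pi
         + f\!\left(p^2/\mu^2\right)\right].
% MS subtracts only the pole term 1/\epsilon;
% \overline{\mathrm{MS}} subtracts the recurring combination
%   1/\epsilon - \gamma_E + \ln 4\pi;
% the on-shell scheme instead fixes the counterterms by physical
% conditions, e.g. that the propagator pole sits at the measured mass
% with unit residue.
```

All schemes remove the same divergence; they differ only in which finite pieces are absorbed into the counterterms, and physical predictions are scheme-independent.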
Renormalization in statistical physics
History
A deeper understanding of the physical meaning and generalization of the renormalization process, which goes beyond the dilatation group of conventional renormalizable theories, came from condensed matter physics. Leo P. Kadanoff's paper in 1966 proposed the "block-spin" renormalization group (L.P. Kadanoff (1966): "Scaling laws for Ising models near T_c", Physics (Long Island City, N.Y.) 2, 263). The blocking idea is a way to define the components of the theory at large distances as aggregates of components at shorter distances.

This approach covered the conceptual point and was given full computational substance in the extensive important contributions of Kenneth Wilson (K.G. Wilson (1975): "The renormalization group: critical phenomena and the Kondo problem", Rev. Mod. Phys. 47, 4, 773). The power of Wilson's ideas was demonstrated by a constructive iterative renormalization solution of a long-standing problem, the Kondo problem, in 1974, as well as the preceding seminal developments of his new method in the theory of second-order phase transitions and critical phenomena in 1971. He was awarded the Nobel prize for these decisive contributions in 1982.

Principles
In more technical terms, let us assume that we have a theory described by a certain function Z of the state variables {s_i} and a certain set of coupling constants {J_k}. This function may be a partition function, an action, a Hamiltonian, etc. It must contain the whole description of the physics of the system.

Now we consider a certain blocking transformation of the state variables {s_i} to {s̃_i}; the number of s̃_i must be lower than the number of s_i. Now let us try to rewrite the Z function only in terms of the s̃_i. If this is achievable by a certain change in the parameters, {J_k} to {J̃_k}, then the theory is said to be renormalizable. Iterating the blocking transformation generates a flow in the space of couplings; couplings left unchanged by the transformation are fixed points of this flow. The possible macroscopic states of the system, at a large scale, are given by this set of fixed points.

Renormalization group fixed points
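Such a blocking transformation can be carried out exactly in one simple case, the one-dimensional Ising chain, where summing out every second spin ("decimation") yields a closed-form map for the coupling. The sketch below is illustrative and not from the article; the function name `decimate` is my own:

```python
import math

def decimate(K):
    """One exact block-spin (decimation) step for the 1D Ising chain.

    Summing over every second spin in the partition function
    Z = sum over configs of exp(K * sum_i s_i * s_{i+1}) leaves a chain
    with half as many spins and a single renormalized coupling
        K' = (1/2) * ln(cosh(2K)),
    i.e. the map {J_k} -> {J~_k} of the text, reduced here to one
    dimensionless coupling K = J / (k_B * T).
    """
    return 0.5 * math.log(math.cosh(2.0 * K))

# Iterating the blocking map exposes the RG flow: every finite K is
# driven toward the trivial fixed point K* = 0 (high temperature),
# which is why the 1D Ising model has no finite-temperature transition.
K = 1.0
for step in range(10):
    K = decimate(K)
print(K)  # essentially zero: the flow has collapsed onto K* = 0
```

Because K' < K for every finite K, the only fixed points of this flow are K = 0 and K = ∞, matching the statement above that the large-scale states of the system are labeled by the fixed points.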
The most important information in the RG flow is its fixed points. A fixed point is defined by the vanishing of the beta function associated with the flow. Fixed points of the renormalization group are therefore by definition scale invariant. In many cases of physical interest, scale invariance enlarges to conformal invariance, and one then has a conformal field theory at the fixed point. The ability of several theories to flow to the same fixed point leads to universality.

If these fixed points correspond to free field theory, the theory is said to exhibit quantum triviality. Numerous fixed points appear in the study of lattice Higgs theories, but the nature of the quantum field theories associated with these remains an open question (D. J. E. Callaway (1988), Physics Reports, issue 5, doi:10.1016/0370-1573(88)90008-7).

See also

References
{{reflist|35em}}

Further reading

General introduction
Mainly: quantum field theory
Mainly: statistical physics
Miscellaneous

 content above as imported from Wikipedia
 "renormalization" does not exist on GetWiki (yet)
 time: 7:13am EDT  Sun, Oct 20 2019
[ this remote article is provided by Wikipedia ]