Normal distribution

{{Short description|Probability distribution}}{{Redirect|Bell curve}}

Summary of the distribution's main characteristics (parameters \mu and \sigma):
  • PDF: f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}
  • CDF: \Phi\left(\frac{x-\mu}{\sigma}\right) = \frac{1}{2}\left[1 + \operatorname{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]
  • Quantile: \mu + \sigma\sqrt{2}\,\operatorname{erf}^{-1}(2p-1)
  • Mean: \mu
  • Median: \mu
  • Mode: \mu
  • Variance: \sigma^2
  • Mean absolute deviation: \sigma\sqrt{2/\pi}
  • Skewness: 0
  • Excess kurtosis: 0
  • Entropy: \frac{1}{2}\log(2\pi e\sigma^2)
  • MGF: \exp(\mu t + \sigma^2 t^2/2)
  • Characteristic function: \exp(i\mu t - \sigma^2 t^2/2)
  • Fisher information: \mathcal{I}(\mu,\sigma) = \begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 2/\sigma^2 \end{pmatrix}, \qquad \mathcal{I}(\mu,\sigma^2) = \begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 1/(2\sigma^4) \end{pmatrix}
  • Kullback–Leibler divergence: \frac{1}{2}\left\{\left(\frac{\sigma_0}{\sigma_1}\right)^2 + \frac{(\mu_1-\mu_0)^2}{\sigma_1^2} - 1 + \ln\frac{\sigma_1^2}{\sigma_0^2}\right\}
  • Expected shortfall: \mu - \sigma\, \frac{\frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}\left(q_p\left(\frac{X-\mu}{\sigma}\right)\right)^2}}{1-p}, where q_p(\cdot) denotes the p-quantile (Norton, Khokhlov & Uryasev (2019). "Calculating CVaR and bPOE for common probability distributions with application to portfolio optimization and density estimation". Annals of Operations Research 299(1–2), 1281–1315. doi:10.1007/s10479-019-03373-1. arXiv:1811.11301.)
In probability theory and statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}.
The parameter \mu is the mean or expectation of the distribution (and also its median and mode), while the parameter \sigma is its standard deviation. The variance of the distribution is \sigma^2. A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate.

Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known ("Normal Distribution", Gale Encyclopedia of Psychology; {{harvtxt |Casella |Berger |2001 |p=102 }}). Their importance is partly due to the central limit theorem, which states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable whose distribution converges to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as measurement errors, often have distributions that are nearly normal (Lyon, A. (2014). "Why are Normal Distributions Normal?", The British Journal for the Philosophy of Science).

Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, any linear combination of a fixed collection of independent normal deviates is a normal deviate. Many results and methods, such as propagation of uncertainty and least squares parameter fitting (Nocedal, Jorge & Wright, Stephen J. (2006). Numerical Optimization, 2nd ed. Springer, p. 249. ISBN 978-0387-30303-1), can be derived analytically in explicit form when the relevant variables are normally distributed.

A normal distribution is sometimes informally called a bell curve ("Normal Distribution", www.mathsisfun.com). However, many other distributions are bell-shaped (such as the Cauchy, Student's t, and logistic distributions). For other names, see Naming.

The univariate probability distribution is generalized for vectors in the multivariate normal distribution and for matrices in the matrix normal distribution.

Definitions

Standard normal distribution

The simplest case of a normal distribution is known as the standard normal distribution or unit normal distribution. This is a special case when \mu = 0 and \sigma = 1, and it is described by this probability density function (or density):
\varphi(z) = \frac{e^{-z^2/2}}{\sqrt{2\pi}}.
The variable z has a mean of 0 and a variance and standard deviation of 1. The density \varphi(z) has its peak 1/\sqrt{2\pi} at z = 0 and inflection points at z = +1 and z = -1.

Although the density above is most commonly known as the standard normal, a few authors have used that term to describe other versions of the normal distribution. Carl Friedrich Gauss, for example, once defined the standard normal as
\varphi(z) = \frac{e^{-z^2}}{\sqrt\pi},
which has a variance of 1/2, and Stephen Stigler{{harvtxt |Stigler |1982 }} once defined the standard normal as
\varphi(z) = e^{-\pi z^2},
which has a simple functional form and a variance of \sigma^2 = 1/(2\pi).

General normal distribution

Every normal distribution is a version of the standard normal distribution, whose domain has been stretched by a factor \sigma (the standard deviation) and then translated by \mu (the mean value):
f(x \mid \mu, \sigma^2) = \frac{1}{\sigma}\, \varphi\left(\frac{x-\mu}{\sigma}\right).
The probability density must be scaled by 1/\sigma so that the integral is still 1.

If Z is a standard normal deviate, then X = \sigma Z + \mu will have a normal distribution with expected value \mu and standard deviation \sigma. This is equivalent to saying that the standard normal distribution Z can be scaled/stretched by a factor of \sigma and shifted by \mu to yield a different normal distribution, called X. Conversely, if X is a normal deviate with parameters \mu and \sigma^2, then this X distribution can be re-scaled and shifted via the formula Z = (X-\mu)/\sigma to convert it to the standard normal distribution. This variate is also called the standardized form of X.
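As an illustration (a minimal Python sketch using only the standard library, added here and not part of the cited sources; the function names are ours), the identity above can be checked directly: the density of N(\mu, \sigma^2) is the standard normal density evaluated at the standardized value, divided by \sigma.

<syntaxhighlight lang="python">
import math

def phi(z):
    """Standard normal density."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2), written via the standard normal phi."""
    return phi((x - mu) / sigma) / sigma

# Standardizing: if X ~ N(mu, sigma^2), then Z = (X - mu)/sigma ~ N(0, 1).
mu, sigma, x = 10.0, 2.0, 13.0
z = (x - mu) / sigma                 # standardized form, here 1.5
print(normal_pdf(x, mu, sigma))      # ~0.0647588
print(phi(z) / sigma)                # identical by construction
</syntaxhighlight>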

Notation

The probability density of the standard Gaussian distribution (standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter \phi (phi).{{harvtxt |Halperin |Hartley |Hoel |1965 |loc=item 7 }} The alternative form of the Greek letter phi, \varphi, is also used quite often.

The normal distribution is often referred to as N(\mu, \sigma^2) or \mathcal{N}(\mu, \sigma^2).{{harvtxt |McPherson |1990 |p=110 }} Thus when a random variable X is normally distributed with mean \mu and standard deviation \sigma, one may write
X \sim \mathcal{N}(\mu, \sigma^2).

Alternative parameterizations

Some authors advocate using the precision \tau as the parameter defining the width of the distribution, instead of the standard deviation \sigma or the variance \sigma^2. The precision is normally defined as the reciprocal of the variance, 1/\sigma^2.{{harvtxt |Bernardo |Smith |2000 |page=121 }} The formula for the distribution then becomes
f(x) = \sqrt{\frac{\tau}{2\pi}}\, e^{-\tau(x-\mu)^2/2}.
This choice is claimed to have advantages in numerical computations when \sigma is very close to zero, and simplifies formulas in some contexts, such as in the Bayesian inference of variables with multivariate normal distribution.

Alternatively, the reciprocal of the standard deviation \tau' = 1/\sigma might be defined as the precision, in which case the expression of the normal distribution becomes
f(x) = \frac{\tau'}{\sqrt{2\pi}}\, e^{-(\tau')^2(x-\mu)^2/2}.
According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the quantiles of the distribution.

Normal distributions form an exponential family with natural parameters \theta_1 = \frac{\mu}{\sigma^2} and \theta_2 = \frac{-1}{2\sigma^2}, and natural statistics x and x^2. The dual expectation parameters for the normal distribution are \eta_1 = \mu and \eta_2 = \mu^2 + \sigma^2.
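A quick numerical check (an illustrative Python sketch, not from the cited sources; function names are ours) confirms that the precision parameterization with \tau = 1/\sigma^2 yields the same density as the \sigma form:

<syntaxhighlight lang="python">
import math

def pdf_sigma(x, mu, sigma):
    """Density of N(mu, sigma^2) in the standard-deviation parameterization."""
    return math.exp(-((x - mu) / sigma) ** 2 / 2) / (sigma * math.sqrt(2 * math.pi))

def pdf_tau(x, mu, tau):
    """Same density in the precision parameterization, tau = 1/sigma^2."""
    return math.sqrt(tau / (2 * math.pi)) * math.exp(-tau * (x - mu) ** 2 / 2)

mu, sigma = 1.0, 0.5
tau = 1 / sigma ** 2
print(pdf_sigma(2.0, mu, sigma), pdf_tau(2.0, mu, tau))  # identical values
</syntaxhighlight>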

Cumulative distribution function

The cumulative distribution function (CDF) of the standard normal distribution, usually denoted with the capital Greek letter \Phi (phi), is the integral
\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2} \, dt.

Error function

The related error function \operatorname{erf}(x) gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2, falling in the range [-x, x]. That is:
\operatorname{erf}(x) = \frac{1}{\sqrt\pi} \int_{-x}^x e^{-t^2} \, dt = \frac{2}{\sqrt\pi} \int_0^x e^{-t^2} \, dt.
These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions. However, many numerical approximations are known; see below for more. The two functions are closely related, namely
\Phi(x) = \frac{1}{2} \left[1 + \operatorname{erf}\left(\frac{x}{\sqrt 2}\right)\right].
For a generic normal distribution with density f, mean \mu and deviation \sigma, the cumulative distribution function is
F(x) = \Phi\left(\frac{x-\mu}{\sigma}\right) = \frac{1}{2} \left[1 + \operatorname{erf}\left(\frac{x-\mu}{\sigma \sqrt 2}\right)\right].
The complement of the standard normal cumulative distribution function, Q(x) = 1 - \Phi(x), is often called the Q-function, especially in engineering texts (Scott, Clayton & Nowak, Robert (2003). "The Q-function", Connexions; Barak, Ohad (2006). "Q Function and Error Function", Tel Aviv University). It gives the probability that the value of a standard normal random variable X will exceed x: P(X > x). Other definitions of the Q-function, all of which are simple transformations of \Phi, are also used occasionally.{{MathWorld |urlname=NormalDistributionFunction |title=Normal Distribution Function }}

The graph of the standard normal cumulative distribution function \Phi has 2-fold rotational symmetry around the point (0, 1/2); that is, \Phi(-x) = 1 - \Phi(x). Its antiderivative (indefinite integral) can be expressed as follows:
\int \Phi(x)\, dx = x\Phi(x) + \varphi(x) + C.
The cumulative distribution function of the standard normal distribution can be expanded by integration by parts into a series:
\Phi(x) = \frac{1}{2} + \frac{1}{\sqrt{2\pi}} \cdot e^{-x^2/2} \left[x + \frac{x^3}{3} + \frac{x^5}{3 \cdot 5} + \cdots + \frac{x^{2n+1}}{(2n+1)!!} + \cdots\right],
where !! denotes the double factorial.

An asymptotic expansion of the cumulative distribution function for large x can also be derived using integration by parts. For more, see Error function#Asymptotic expansion.{{AS ref|26, eqn 26.2.12|932}}

A quick approximation to the standard normal distribution's cumulative distribution function can be found by using a Taylor series approximation:
\Phi(x) \approx \frac{1}{2} + \frac{1}{\sqrt{2\pi}} \sum_{k=0}^n \frac{(-1)^k x^{2k+1}}{2^k k! (2k+1)}.
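The partial sum above is easy to evaluate directly. The following Python sketch (illustrative, not from the cited sources; function names are ours) compares it against the erf-based expression for \Phi given earlier:

<syntaxhighlight lang="python">
import math

def Phi_taylor(x, n=40):
    """Partial sum of the Maclaurin-series approximation of Phi above."""
    s = 0.0
    for k in range(n + 1):
        s += (-1) ** k * x ** (2 * k + 1) / (2 ** k * math.factorial(k) * (2 * k + 1))
    return 0.5 + s / math.sqrt(2 * math.pi)

def Phi_erf(x):
    """Reference value via the erf relation given earlier."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

print(Phi_taylor(1.0), Phi_erf(1.0))   # both ~0.841344746...
</syntaxhighlight>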

Recursive computation with Taylor series expansion

The recursive nature of the e^{ax^2} family of derivatives may be used to easily construct a rapidly converging Taylor series expansion using recursive entries about any point of known value of the distribution, \Phi(x_0):
\Phi(x) = \sum_{n=0}^\infty \frac{\Phi^{(n)}(x_0)}{n!}(x - x_0)^n,
where:
\begin{align}
\Phi^{(0)}(x_0) &= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x_0} e^{-t^2/2}\,dt \\
\Phi^{(1)}(x_0) &= \frac{1}{\sqrt{2\pi}}\, e^{-x_0^2/2} \\
\Phi^{(n)}(x_0) &= -\left(x_0 \Phi^{(n-1)}(x_0) + (n-2) \Phi^{(n-2)}(x_0)\right), & n \geq 2.
\end{align}
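A minimal Python sketch of this recursion (illustrative, not from the cited sources; function names are ours), expanding about x_0 = 0 where \Phi(0) = 1/2 is known exactly:

<syntaxhighlight lang="python">
import math

def Phi_about(x, x0, Phi_x0, terms=30):
    """Evaluate Phi(x) from a known value Phi(x0) via the recursive
    derivatives Phi^(n)(x0) defined above."""
    d = [Phi_x0, math.exp(-x0 * x0 / 2) / math.sqrt(2 * math.pi)]  # Phi^(0), Phi^(1)
    for n in range(2, terms):
        d.append(-(x0 * d[n - 1] + (n - 2) * d[n - 2]))
    return sum(d[n] * (x - x0) ** n / math.factorial(n) for n in range(terms))

print(Phi_about(1.0, 0.0, 0.5))                    # ~0.841344746...
print(0.5 * (1 + math.erf(1 / math.sqrt(2))))      # reference value
</syntaxhighlight>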

Using the Taylor series and Newton's method for the inverse function

An application for the above Taylor series expansion is to use Newton's method to reverse the computation. That is, if we have a value for the cumulative distribution function, \Phi(x), but do not know the x needed to obtain the \Phi(x), we can use Newton's method to find x, and use the Taylor series expansion above to minimize the number of computations. Newton's method is well suited to this problem because the first derivative of \Phi(x) is simply the standard normal density \varphi(x), which is readily available for use in the Newton's method solution.

To solve, select a known approximate solution, x_0, to the desired \Phi(x). x_0 may be a value from a distribution table, or an intelligent estimate followed by a computation of \Phi(x_0) using any desired means. Use this value of x_0 and the Taylor series expansion above to minimize computations. Repeat the following process until the difference between the computed \Phi(x_n) and the desired value, which we will call \Phi(\text{desired}), is below a chosen acceptably small error, such as 10^{-5} or 10^{-15}:
x_{n+1} = x_n - \frac{\Phi(x_n, x_0, \Phi(x_0)) - \Phi(\text{desired})}{\Phi'(x_n)},
where
\Phi(x, x_0, \Phi(x_0)) is the \Phi(x) from a Taylor series solution using x_0 and \Phi(x_0), and
\Phi'(x_n) = \frac{1}{\sqrt{2\pi}}\, e^{-x_n^2/2}.
When the repeated computations converge to an error below the chosen acceptably small value, x will be the value needed to obtain a \Phi(x) of the desired value, \Phi(\text{desired}).
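The iteration is short in code. In the following Python sketch (illustrative, not from the cited sources; function names are ours), \Phi is evaluated via math.erf rather than the Taylor expansion described above, purely for brevity; the Newton iteration itself is unchanged:

<syntaxhighlight lang="python">
import math

def phi(x):
    """Standard normal density: the derivative of Phi."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal CDF via the erf relation (stand-in for the Taylor series)."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def inv_Phi_newton(target, x0=0.0, tol=1e-15, max_iter=100):
    """Solve Phi(x) = target by Newton's method, using Phi'(x) = phi(x)."""
    x = x0
    for _ in range(max_iter):
        step = (Phi(x) - target) / phi(x)
        x -= step
        if abs(step) < tol:
            break
    return x

print(inv_Phi_newton(0.975))   # ~1.959963985...
</syntaxhighlight>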

Standard deviation and coverage

{{Further|Interval estimation|Coverage probability}}

(File:Standard deviation diagram.svg|thumb|350px|For the normal distribution, the values less than one standard deviation away from the mean account for 68.27% of the set; while two standard deviations from the mean account for 95.45%; and three standard deviations account for 99.73%.)

About 68% of values drawn from a normal distribution are within one standard deviation σ away from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations. This fact is known as the 68–95–99.7 (empirical) rule, or the 3-sigma rule.

More precisely, the probability that a normal deviate lies in the range between \mu - n\sigma and \mu + n\sigma is given by
F(\mu + n\sigma) - F(\mu - n\sigma) = \Phi(n) - \Phi(-n) = \operatorname{erf}\left(\frac{n}{\sqrt{2}}\right).
To 12 significant digits, the values for n = 1, 2, \ldots, 6 are:{{citation needed|date=August 2022}}

{| class="wikitable" style="text-align:center;margin-left:24pt"
! n !! p = F(\mu+n\sigma) - F(\mu-n\sigma) !! \text{i.e. } 1-p !! \text{or } 1 \text{ in } !! OEIS
|-
| 1 || {{val|0.682689492137}} || {{val|0.317310507863}} || {{val|3.15148718753}} || {{OEIS2C|A178647}}
|-
| 2 || {{val|0.954499736104}} || {{val|0.045500263896}} || {{val|21.9778945080}} || {{OEIS2C|A110894}}
|-
| 3 || {{val|0.997300203937}} || {{val|0.002699796063}} || {{val|370.398347345}} || {{OEIS2C|A270712}}
|-
| 4 || {{val|0.999936657516}} || {{val|0.000063342484}} || {{val|15787.1927673}} ||
|-
| 5 || {{val|0.999999426697}} || {{val|0.000000573303}} || {{val|1744277.89362}} ||
|-
| 6 || {{val|0.999999998027}} || {{val|0.000000001973}} || {{val|506797345.897}} ||
|}

For large n, one can use the approximation 1 - p \approx \frac{e^{-n^2/2}}{n\sqrt{\pi/2}}. A numerical check of both the exact values and this approximation appears below.
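The following Python sketch (illustrative, not from the cited sources) reproduces the coverage probabilities in the table via the erf identity above and compares the tail 1-p against the large-n approximation:

<syntaxhighlight lang="python">
import math

for n in range(1, 7):
    p = math.erf(n / math.sqrt(2))                  # P(|X - mu| < n*sigma)
    approx = math.exp(-n * n / 2) / (n * math.sqrt(math.pi / 2))  # large-n tail
    print(f"{n}  p={p:.12f}  1-p={1 - p:.3e}  approx={approx:.3e}")
</syntaxhighlight>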

Quantile function

{{Further|Quantile function#Normal distribution}}

The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:
\Phi^{-1}(p) = \sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \quad p \in (0,1).
For a normal random variable with mean \mu and variance \sigma^2, the quantile function is
F^{-1}(p) = \mu + \sigma\Phi^{-1}(p) = \mu + \sigma\sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \quad p \in (0,1).
The quantile \Phi^{-1}(p) of the standard normal distribution is commonly denoted as z_p. These values are used in hypothesis testing, construction of confidence intervals and Q–Q plots. A normal random variable X will exceed \mu + z_p\sigma with probability 1-p, and will lie outside the interval \mu \pm z_p\sigma with probability 2(1-p). In particular, the quantile z_{0.975} is 1.96; therefore a normal random variable will lie outside the interval \mu \pm 1.96\sigma in only 5% of cases.

The following table gives the quantile z_p such that X will lie in the range \mu \pm z_p\sigma with a specified probability p. These values are useful to determine tolerance intervals for sample averages and other statistical estimators with normal (or asymptotically normal) distributions (van der Vaart, A. W. (1998). Asymptotic Statistics. Cambridge University Press. doi:10.1017/cbo9780511802256. ISBN 978-0-511-80225-6). Note that the table shows \sqrt{2}\,\operatorname{erf}^{-1}(p) = \Phi^{-1}\left(\frac{p+1}{2}\right), not \Phi^{-1}(p) as defined above.

{| class="wikitable" style="text-align:left;margin-left:24pt;border:none;background:none;"
! p !! z_p !! p !! z_p
|-
| 0.80 || {{val|1.281551565545}} || 0.999 || {{val|3.290526731492}}
|-
| 0.90 || {{val|1.644853626951}} || 0.9999 || {{val|3.890591886413}}
|-
| 0.95 || {{val|1.959963984540}} || 0.99999 || {{val|4.417173413469}}
|-
| 0.98 || {{val|2.326347874041}} || 0.999999 || {{val|4.891638475699}}
|-
| 0.99 || {{val|2.575829303549}} || 0.9999999 || {{val|5.326723886384}}
|-
| 0.995 || {{val|2.807033768344}} || 0.99999999 || {{val|5.730728868236}}
|-
| 0.998 || {{val|3.090232306168}} || 0.999999999 || {{val|6.109410204869}}
|}

For small p, the quantile function has the useful asymptotic expansion
\Phi^{-1}(p) = -\sqrt{\ln\frac{1}{p^2} - \ln\ln\frac{1}{p^2} - \ln(2\pi)} + \mathcal{o}(1).{{citation needed|date=February 2023}}
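In Python, the probit function is available in the standard library as statistics.NormalDist().inv_cdf (Python 3.8+). The following sketch (illustrative, not from the cited sources) reproduces a table entry and checks the leading term of the asymptotic expansion above:

<syntaxhighlight lang="python">
import math
from statistics import NormalDist  # standard library, Python 3.8+

probit = NormalDist().inv_cdf      # the quantile function Phi^{-1}

print(probit(0.975))               # 1.959963984540054, i.e. z_{0.975}
print(probit((0.95 + 1) / 2))      # sqrt(2)*erfinv(0.95): the table's z_p for p = 0.95

# Leading term of the small-p asymptotic expansion:
p = 1e-9
approx = -math.sqrt(math.log(1 / p**2) - math.log(math.log(1 / p**2)) - math.log(2 * math.pi))
print(probit(p), approx)           # ~-5.998 (exact) vs ~-5.991 (asymptotic)
</syntaxhighlight>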

Properties

The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance (Cover, Thomas M. & Thomas, Joy A. (2006). Elements of Information Theory. John Wiley and Sons, p. 254. ISBN 9780471748816; Park, Sung Y. & Bera, Anil K. (2009). "Maximum Entropy Autoregressive Conditional Heteroskedasticity Model". Journal of Econometrics 150(2), 219–230. doi:10.1016/j.jeconom.2008.12.014). Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other (Geary, R. C. (1936). "The distribution of 'Student's' ratio for the non-normal samples". Supplement to the Journal of the Royal Statistical Society 3(2): 178–184; see also Lukacs, Eugene).

The normal distribution is a subclass of the elliptical distributions. It is symmetric about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the weight of a person or the price of a share. Such variables may be better described by other distributions, such as the log-normal distribution or the Pareto distribution.

The value of the normal distribution is practically zero when the value x lies more than a few standard deviations away from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an appropriate model when one expects a significant fraction of outliers (values that lie many standard deviations away from the mean), and least squares and other statistical inference methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more heavy-tailed distribution should be assumed and the appropriate robust statistical inference methods applied.

The Gaussian distribution belongs to the family of stable distributions, which are the attractors of sums of independent, identically distributed distributions whether or not the mean or variance is finite. Except for the Gaussian, which is a limiting case, all stable distributions have heavy tails and infinite variance. The normal distribution is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being the Cauchy distribution and the Lévy distribution.

Symmetries and derivatives

The normal distribution with density f(x) (mean \mu and standard deviation \sigma > 0) has the following properties:
  • It is symmetric around the point x = \mu, which is at the same time the mode, the median and the mean of the distribution.{{harvtxt |Patel |Read |1996 |loc=[2.1.4] }}
  • It is unimodal: its first derivative is positive for x < \mu, negative for x > \mu, and zero only at x = \mu.
  • The area bounded by the curve and the x-axis is unity (i.e. equal to one).
  • Its first derivative is f'(x) = -\frac{x-\mu}{\sigma^2} f(x).
  • Its second derivative is f''(x) = \frac{(x-\mu)^2 - \sigma^2}{\sigma^4} f(x).
  • Its density has two inflection points (where the second derivative of f is zero and changes sign), located one standard deviation away from the mean, namely at x = \mu - \sigma and x = \mu + \sigma.
  • Its density is log-concave.
  • Its density is infinitely differentiable, indeed supersmooth of order 2.{{harvtxt |Fan |1991 |p=1258 }}
Furthermore, the density \varphi of the standard normal distribution (i.e. \mu = 0 and \sigma = 1) also has the following properties:
  • Its first derivative is \varphi'(x) = -x\varphi(x).
  • Its second derivative is \varphi''(x) = (x^2 - 1)\varphi(x).
  • More generally, its {{mvar|n}}th derivative is \varphi^{(n)}(x) = (-1)^n \operatorname{He}_n(x)\varphi(x), where \operatorname{He}_n(x) is the {{mvar|n}}th (probabilist) Hermite polynomial{{harvtxt |Patel |Read |1996 |loc=[2.1.8] }} (see the numerical check after this list).
  • The probability that a normally distributed variable X with known \mu and \sigma is in a particular set can be calculated by using the fact that the fraction Z = (X-\mu)/\sigma has a standard normal distribution.
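The Hermite-polynomial formula for the derivatives can be checked against finite differences. A minimal Python sketch (illustrative, not from the cited sources; function names are ours):

<syntaxhighlight lang="python">
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def hermite_He(n, x):
    """Probabilists' Hermite polynomial via He_{k+1} = x*He_k - k*He_{k-1}."""
    a, b = 1.0, x
    if n == 0:
        return a
    for k in range(1, n):
        a, b = b, x * b - k * a
    return b

# phi^(n)(x) = (-1)^n He_n(x) phi(x); check n = 1, 2 at x = 0.7.
x, h = 0.7, 1e-4
print(-hermite_He(1, x) * phi(x), (phi(x + h) - phi(x - h)) / (2 * h))
print(hermite_He(2, x) * phi(x), (phi(x + h) - 2 * phi(x) + phi(x - h)) / h**2)
</syntaxhighlight>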

Moments

{{See also|List of integrals of Gaussian functions}}

The plain and absolute moments of a variable X are the expected values of X^p and |X|^p, respectively. If the expected value \mu of X is zero, these parameters are called central moments; otherwise, these parameters are called non-central moments. Usually we are interested only in moments with integer order p.

If X has a normal distribution, the non-central moments exist and are finite for any p whose real part is greater than −1. For any non-negative integer p, the plain central moments are (Papoulis, Athanasios. Probability, Random Variables and Stochastic Processes, 4th ed., p. 148):
\operatorname{E}\left[(X-\mu)^p\right] =
\begin{cases}
0 & \text{if } p \text{ is odd,} \\
\sigma^p (p-1)!! & \text{if } p \text{ is even.}
\end{cases}
Here n!! denotes the double factorial, that is, the product of all numbers from n to 1 that have the same parity as n.

The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer p,
\begin{align}
\operatorname{E}\left[|X - \mu|^p\right] &= \sigma^p (p-1)!! \cdot \begin{cases}
\sqrt{\frac{2}{\pi}} & \text{if } p \text{ is odd} \\
1 & \text{if } p \text{ is even}
\end{cases} \\
&= \sigma^p \cdot \frac{2^{p/2}\Gamma\left(\frac{p+1}{2}\right)}{\sqrt\pi}.
\end{align}
The last formula is valid also for any non-integer p > -1. When the mean \mu \ne 0, the plain and absolute moments can be expressed in terms of confluent hypergeometric functions {}_1F_1 and U (Winkelbauer, Andreas (2012). "Moments and Absolute Moments of the Normal Distribution". arXiv:1209.4340):
\begin{align}
\operatorname{E}\left[X^p\right] &= \sigma^p \cdot (-i\sqrt{2})^p\, U\left(-\frac{p}{2}, \frac{1}{2}, -\frac{1}{2}\left(\frac{\mu}{\sigma}\right)^2\right), \\
\operatorname{E}\left[|X|^p\right] &= \sigma^p \cdot 2^{p/2} \frac{\Gamma\left(\frac{1+p}{2}\right)}{\sqrt\pi}\, {}_1F_1\left(-\frac{p}{2}, \frac{1}{2}, -\frac{1}{2}\left(\frac{\mu}{\sigma}\right)^2\right).
\end{align}
These expressions remain valid even if p is not an integer. See also generalized Hermite polynomials.

{| class="wikitable" style="background:#fff; margin: auto;"
! Order !! Non-central moment !! Central moment
|-
| 1 || \mu || 0
|-
| 2 || \mu^2+\sigma^2 || \sigma^2
|-
| 3 || \mu^3+3\mu\sigma^2 || 0
|-
| 4 || \mu^4+6\mu^2\sigma^2+3\sigma^4 || 3\sigma^4
|-
| 5 || \mu^5+10\mu^3\sigma^2+15\mu\sigma^4 || 0
|-
| 6 || \mu^6+15\mu^4\sigma^2+45\mu^2\sigma^4+15\sigma^6 || 15\sigma^6
|-
| 7 || \mu^7+21\mu^5\sigma^2+105\mu^3\sigma^4+105\mu\sigma^6 || 0
|-
| 8 || \mu^8+28\mu^6\sigma^2+210\mu^4\sigma^4+420\mu^2\sigma^6+105\sigma^8 || 105\sigma^8
|}
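The double-factorial formula for the central moments can be sanity-checked by brute-force quadrature. A Python sketch (illustrative, not from the cited sources; function names and the quadrature parameters are ours):

<syntaxhighlight lang="python">
import math

def double_factorial(n):
    """n!! for n >= 0, with 0!! = 1."""
    return math.prod(range(n, 0, -2)) if n > 0 else 1

def central_moment_numeric(p, sigma, half_width=12.0, steps=20001):
    """E[(X - mu)^p] for X ~ N(mu, sigma^2) by a Riemann sum over +-12 sigma
    (the integrand is negligible beyond that range)."""
    h = 2 * half_width * sigma / (steps - 1)
    total = 0.0
    for i in range(steps):
        t = -half_width * sigma + i * h
        total += t ** p * math.exp(-(t / sigma) ** 2 / 2)
    return total * h / (sigma * math.sqrt(2 * math.pi))

sigma = 1.3
for p in range(1, 9):
    closed = 0 if p % 2 else sigma ** p * double_factorial(p - 1)
    print(f"p={p}  closed={closed:.6f}  numeric={central_moment_numeric(p, sigma):.6f}")
</syntaxhighlight>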
The expectation of X conditioned on the event that X lies in an interval [a,b] is given by
\operatorname{E}\left[X \mid a < X < b\right] = \mu - \sigma^2 \frac{f(b) - f(a)}{F(b) - F(a)},
where f and F respectively are the density and the cumulative distribution function of X.

Numerical approximations

{{harvtxt |Zelen |Severo |1964 }} give the approximation for \Phi(x) for x > 0 with the absolute error {{math|{{abs|ε(x)}} < 7.5·10−8}} (algorithm 26.2.17):
\Phi(x) = 1 - \varphi(x)\left(b_1 t + b_2 t^2 + b_3 t^3 + b_4 t^4 + b_5 t^5\right) + \varepsilon(x), \qquad t = \frac{1}{1 + b_0 x},
where \varphi(x) is the standard normal probability density function, and b_0 = 0.2316419, b_1 = 0.319381530, b_2 = −0.356563782, b_3 = 1.781477937, b_4 = −1.821255978, b_5 = 1.330274429.
  • {{harvtxt |Hart |1968 }} lists some dozens of approximations – by means of rational functions, with or without exponentials – for the {{mono|erfc()}} function. His algorithms vary in the degree of complexity and the resulting precision, with maximum absolute precision of 24 digits. An algorithm by {{harvtxt |West |2009 }} combines Hart's algorithm 5666 with a continued fraction approximation in the tail to provide a fast computation algorithm with a 16-digit precision.
  • {{harvtxt |Cody |1969 }}, after recalling that the Hart (1968) solution is not suited for erf, gives a solution for both erf and erfc, with a maximal relative error bound, via rational Chebyshev approximation.
  • {{harvtxt |Marsaglia |2004 }} suggested a simple algorithm{{NoteTag|For example, this algorithm is given in the article Bc programming language.}} based on the Taylor series expansion


\Phi(x) = \frac12 + \varphi(x)\left( x + \frac{x^3}{3} + \frac{x^5}{3 \cdot 5} + \frac{x^7}{3 \cdot 5 \cdot 7} + \frac{x^9}{3 \cdot 5 \cdot 7 \cdot 9} + \cdots \right)
for calculating {{math|Φ(x)}} with arbitrary precision. The drawback of this algorithm is comparatively slow calculation time (for example, it takes over 300 iterations to calculate the function with 16 digits of precision when {{math|1=x = 10}}). A sketch of this iteration appears after this list.
  • The GNU Scientific Library calculates values of the standard normal cumulative distribution function using Hart's algorithms and approximations with Chebyshev polynomials.
  • {{harvtxt |Dia |2023 }} proposes the following approximation of 1-\Phi with a maximum relative error less than 2^{-53} \left(\approx 1.1 \times 10^{-16}\right) in absolute value: for x \ge 0,
\begin{aligned}
1-\Phi(x) = {} & \left(\frac{0.39894228040143268}{x+2.92678600515804815}\right)
\left(\frac{x^2+8.42742300458043240\,x+18.38871225773938487}{x^2+5.81582518933527391\,x+8.97280659046817350}\right) \\
& \left(\frac{x^2+7.30756258553673541\,x+18.25323235347346525}{x^2+5.70347935898051437\,x+10.27157061171363079}\right)
\left(\frac{x^2+5.66479518878470765\,x+18.61193318971775795}{x^2+5.51862483025707963\,x+12.72323261907760928}\right) \\
& \left(\frac{x^2+4.91396098895240075\,x+24.14804072812762821}{x^2+5.26184239579604207\,x+16.88639562007936908}\right)
\left(\frac{x^2+3.83362947800146179\,x+11.61511226260603247}{x^2+4.92081346632882033\,x+24.12333774572479110}\right) e^{-\frac{x^2}{2}}
\end{aligned}
and for x < 0, by the symmetry of the distribution, 1 - \Phi(x) = 1 - (1 - \Phi(-x)). A sketch of this product form also follows the list.
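Marsaglia's series is straightforward to iterate: each term is the previous one multiplied by x^2/(2k+1). A minimal Python sketch (illustrative, not from the cited source; the function name and stopping rule are ours):

<syntaxhighlight lang="python">
import math

def Phi_marsaglia(x, tol=1e-16):
    """Marsaglia's Taylor-series algorithm quoted above: accumulate terms
    x^(2k+1)/(1*3*5*...*(2k+1)) until they stop changing the sum."""
    term = x
    total = x
    k = 0
    while abs(term) > tol * abs(total):
        k += 1
        term *= x * x / (2 * k + 1)
        total += term
    return 0.5 + total * math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

print(Phi_marsaglia(1.0))                       # ~0.8413447460...
print(0.5 * (1 + math.erf(1 / math.sqrt(2))))   # reference value
</syntaxhighlight>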
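Dia's product approximation is likewise mechanical to transcribe. A Python sketch (illustrative; the function name and data layout are ours, the coefficients are those quoted above, valid for x ≥ 0):

<syntaxhighlight lang="python">
import math

# (numerator, denominator) coefficient pairs (a1, a0), (b1, b0) for each
# rational factor (x^2 + a1*x + a0)/(x^2 + b1*x + b0) quoted above.
NUM_DEN = [
    ((8.42742300458043240, 18.38871225773938487), (5.81582518933527391, 8.97280659046817350)),
    ((7.30756258553673541, 18.25323235347346525), (5.70347935898051437, 10.27157061171363079)),
    ((5.66479518878470765, 18.61193318971775795), (5.51862483025707963, 12.72323261907760928)),
    ((4.91396098895240075, 24.14804072812762821), (5.26184239579604207, 16.88639562007936908)),
    ((3.83362947800146179, 11.61511226260603247), (4.92081346632882033, 24.12333774572479110)),
]

def Q_dia(x):
    """1 - Phi(x) for x >= 0 via the product approximation quoted above."""
    value = 0.39894228040143268 / (x + 2.92678600515804815)
    for (a1, a0), (b1, b0) in NUM_DEN:
        value *= (x * x + a1 * x + a0) / (x * x + b1 * x + b0)
    return value * math.exp(-x * x / 2)

print(Q_dia(2.0))                                   # ~0.0227501319...
print(1 - 0.5 * (1 + math.erf(2 / math.sqrt(2))))   # reference value
</syntaxhighlight>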
History

Gauss defined his law of errors as
\varphi(\Delta) = \frac{h}{\sqrt\pi}\, e^{-h^2\Delta^2},
where h is "the measure of the precision of the observations". Using this normal law as a generic model for errors in the experiments, Gauss formulates what is now known as the non-linear weighted least squares method.

Naming

Today, the concept is usually known in English as the normal distribution or Gaussian distribution. Other, less common names include Gauss distribution, Laplace–Gauss distribution, the law of error, the law of facility of errors, Laplace's second law, and Gaussian law.

Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with normal having its technical meaning of orthogonal rather than usual.
