sampling distribution

In statistics, a sampling distribution or finite-sample distribution is the probability distribution of a given random-sample-based statistic. If an arbitrarily large number of samples, each involving multiple observations (data points), were separately used to compute one value of a statistic (such as the sample mean or sample variance) for each sample, then the sampling distribution is the probability distribution of the values that the statistic takes on. In many contexts only one sample is observed, but the sampling distribution can be found theoretically.

Sampling distributions are important in statistics because they provide a major simplification en route to statistical inference. More specifically, they allow analytical considerations to be based on the probability distribution of a statistic, rather than on the joint probability distribution of all the individual sample values.

Introduction

The sampling distribution of a statistic is the distribution of that statistic, considered as a random variable, when derived from a random sample of size n. It may be considered as the distribution of the statistic for all possible samples from the same population of a given sample size. The sampling distribution depends on the underlying distribution of the population, the statistic being considered, the sampling procedure employed, and the sample size used. There is often considerable interest in whether the sampling distribution can be approximated by an asymptotic distribution, which corresponds to the limiting case either as the number of random samples of finite size, taken from an infinite population and used to produce the distribution, tends to infinity, or when just one equally-infinite-size "sample" is taken of that same population.

For example, consider a normal population with mean μ and variance σ². Assume we repeatedly take samples of a given size from this population and calculate the arithmetic mean x̄ for each sample; this statistic is called the sample mean. The distribution of these means is called the "sampling distribution of the sample mean". This distribution is normal, N(μ, σ²/n) (where n is the sample size), since the underlying population is normal, although sampling distributions may also often be close to normal even when the population distribution is not (see central limit theorem). An alternative to the sample mean is the sample median. When calculated from the same population, it has a different sampling distribution from that of the mean and is generally not normal (but it may be close for large sample sizes).

The mean of a sample from a population having a normal distribution is an example of a simple statistic taken from one of the simplest statistical populations. For other statistics and other populations the formulas are more complicated, and often they do not exist in closed form. In such cases the sampling distributions may be approximated through Monte-Carlo simulations (Mooney 1999, p. 2), bootstrap methods, or asymptotic distribution theory.
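As an illustration of the Monte-Carlo approach, the sampling distribution of a statistic with no convenient closed form, such as the sample median, can be approximated by drawing many samples and computing the statistic on each. A minimal sketch in Python using only the standard library; the function name is illustrative, not from any particular package:

```python
import random
import statistics

def monte_carlo_sampling_distribution(statistic, draw, sample_size,
                                      n_samples, seed=0):
    """Approximate the sampling distribution of `statistic` by drawing
    `n_samples` independent samples of `sample_size` observations each
    and computing the statistic on every sample."""
    rng = random.Random(seed)
    return [statistic([draw(rng) for _ in range(sample_size)])
            for _ in range(n_samples)]

# Sampling distribution of the sample median, standard-normal population.
medians = monte_carlo_sampling_distribution(
    statistics.median, lambda rng: rng.gauss(0.0, 1.0),
    sample_size=25, n_samples=2000)

print(statistics.mean(medians))   # centred near the population median, 0
print(statistics.stdev(medians))  # spread of the sampling distribution
```

The returned list of statistic values is itself an empirical picture of the sampling distribution; its standard deviation estimates the standard error discussed below.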

Standard error

The standard deviation of the sampling distribution of a statistic is referred to as the standard error of that quantity. For the case where the statistic is the sample mean, and samples are uncorrelated, the standard error is

σ_x̄ = σ / √n

where σ is the standard deviation of the population distribution of that quantity and n is the sample size (number of items in the sample).

An important implication of this formula is that the sample size must be quadrupled (multiplied by 4) to achieve half (1/2) the measurement error. When designing statistical studies where cost is a factor, this may have a role in understanding cost–benefit tradeoffs.

For the case where the statistic is the sample total, and samples are uncorrelated, the standard error is

σ_Σx = σ √n

where, again, σ is the standard deviation of the population distribution of that quantity and n is the sample size.
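The σ/√n formula, and the quadrupling rule it implies, can be checked empirically. A short standard-library Python sketch (the function name is illustrative):

```python
import math
import random
import statistics

def standard_error_of_mean(sigma, n):
    # standard error of the sample mean, uncorrelated observations
    return sigma / math.sqrt(n)

# Empirical check: the spread of many simulated sample means should be
# close to sigma / sqrt(n).  Population: normal with sigma = 2, n = 16.
rng = random.Random(1)
sigma, n = 2.0, 16
means = [statistics.mean(rng.gauss(0.0, sigma) for _ in range(n))
         for _ in range(5000)]

print(statistics.stdev(means))               # near 2 / sqrt(16) = 0.5
# Quadrupling the sample size halves the standard error:
print(standard_error_of_mean(sigma, 4 * n))  # 2 / sqrt(64) = 0.25
```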

Examples

{| class="wikitable"
! Population !! Statistic !! Sampling distribution
|-
| Normal: N(μ, σ²) || Sample mean X̄ from samples of size n || X̄ ~ N(μ, σ²/n). If the standard deviation σ is not known, one can consider T = (X̄ − μ)√n/S, which follows the Student's t-distribution with ν = n − 1 degrees of freedom. Here S² is the sample variance, and T is a pivotal quantity, whose distribution does not depend on σ.
|-
| Bernoulli: Bernoulli(p) || Sample proportion of "successful trials" X̄ || nX̄ ~ Binomial(n, p)
|-
| Two independent normal populations: N(μ₁, σ₁²) and N(μ₂, σ₂²) || Difference between sample means, X̄₁ − X̄₂ || X̄₁ − X̄₂ ~ N(μ₁ − μ₂, σ₁²/n₁ + σ₂²/n₂)
|-
| Any absolutely continuous distribution F with density f || Median X₍ₖ₎ from a sample of size n = 2k − 1, where the sample is ordered X₍₁₎ to X₍ₙ₎ || f_{X₍ₖ₎}(x) = ((2k − 1)!/((k − 1)!)²) f(x) (F(x)(1 − F(x)))^(k−1)
|-
| Any distribution with distribution function F || Maximum M = max Xₖ from a random sample of size n || F_M(x) = P(M ≤ x) = ∏ P(Xₖ ≤ x) = (F(x))ⁿ
|}
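The last table entry can be checked directly: for a Uniform(0, 1) population, F(x) = x, so the maximum of n draws has distribution function xⁿ. A small standard-library Python sketch comparing the simulated maxima against this closed form:

```python
import random

# For Uniform(0, 1), F(x) = x, so the sample maximum M of n draws
# satisfies F_M(x) = P(M <= x) = x**n.  Compare the empirical CDF of
# simulated maxima against this closed form at one point x.
rng = random.Random(2)
n, trials = 5, 10_000
maxima = [max(rng.random() for _ in range(n)) for _ in range(trials)]

x = 0.8
empirical_cdf = sum(m <= x for m in maxima) / trials
print(empirical_cdf)        # close to 0.8**5 = 0.32768
```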

References

- Mooney, Christopher Z. (1999). Monte Carlo Simulation. Thousand Oaks, Calif.: Sage. ISBN 9780803959439.

