# GetWiki

*interpolation*


**please note:**

- the content below is remote from Wikipedia

- it has been imported raw for GetWiki

**Interpolation** is a method of constructing new data points within the range of a discrete set of known data points.

In engineering and science, one often has a number of data points, obtained by sampling or experimentation, which represent the values of a function for a limited number of values of the independent variable. It is often required to **interpolate**, i.e., estimate the value of that function for an intermediate value of the independent variable.

A closely related problem is the approximation of a complicated function by a simple function. Suppose the formula for some given function is known, but too complicated to evaluate efficiently. A few data points from the original function can be interpolated to produce a simpler function which is still fairly close to the original. The resulting gain in simplicity may outweigh the loss from interpolation error.

(Figure: an interpolation of a finite set of points on an epitrochoid. The points in red are connected by blue interpolated spline curves deduced only from the red points. The interpolated curves have polynomial formulas much simpler than that of the original epitrochoid curve.)

## Example

This table gives some values of an unknown function f(x).

| x | f(x)    |
|---|---------|
| 0 | 0.0000  |
| 1 | 0.8415  |
| 2 | 0.9093  |
| 3 | 0.1411  |
| 4 | −0.7568 |
| 5 | −0.9589 |
| 6 | −0.2794 |

(Figure: plot of the data points as given in the table.)

Interpolation provides a means of estimating the function at intermediate points, such as x = 2.5.

### Piecewise constant interpolation

(Figure: piecewise constant interpolation, also known as nearest-neighbor interpolation.)

The simplest interpolation method is to locate the nearest data value, and assign the same value. In simple problems, this method is unlikely to be used, as linear interpolation (see below) is almost as easy, but in higher-dimensional multivariate interpolation, this could be a favourable choice for its speed and simplicity.

### Linear interpolation

(Figure: plot of the data with linear interpolation superimposed.)

One of the simplest methods is linear interpolation (sometimes known as lerp). Consider the above example of estimating *f*(2.5). Since 2.5 is midway between 2 and 3, it is reasonable to take *f*(2.5) midway between *f*(2) = 0.9093 and *f*(3) = 0.1411, which yields 0.5252.

Generally, linear interpolation takes two data points, say (x_a, y_a) and (x_b, y_b), and the interpolant at the point (x, y) is given by:

y = y_a + (y_b - y_a) \frac{x - x_a}{x_b - x_a}

\frac{y - y_a}{y_b - y_a} = \frac{x - x_a}{x_b - x_a}

\frac{y - y_a}{x - x_a} = \frac{y_b - y_a}{x_b - x_a}

The previous equation states that the slope of the new line between (x_a, y_a) and (x, y) is the same as the slope of the line between (x_a, y_a) and (x_b, y_b). Linear interpolation is quick and easy, but it is not very precise. Another disadvantage is that the interpolant is not differentiable at the data points x_k.

The following error estimate shows that linear interpolation is not very precise. Denote the function which we want to interpolate by *g*, and suppose that *x* lies between x_a and x_b and that *g* is twice continuously differentiable. Then the linear interpolation error is

|f(x) - g(x)| \le C (x_b - x_a)^2 \quad \text{where} \quad C = \frac{1}{8} \max_{r \in [x_a, x_b]} |g''(r)|.
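As an illustrative sketch (Python, not part of the imported article): the two-point formula reproduces the worked estimate *f*(2.5) = 0.5252, and halving the spacing between known samples of g = sin shrinks the error by roughly a factor of four, consistent with the quadratic error estimate above. The function name `lerp` is just an illustrative choice.

```python
import math

def lerp(xa, ya, xb, yb, x):
    """Linear interpolant through (xa, ya) and (xb, yb), evaluated at x."""
    return ya + (yb - ya) * (x - xa) / (xb - xa)

# The worked example: f(2) = 0.9093 and f(3) = 0.1411 from the table above.
print(lerp(2, 0.9093, 3, 0.1411, 2.5))  # ≈ 0.5252

# The error is O((x_b - x_a)^2): halving the spacing around x = 2.5 for
# g = sin should cut the interpolation error by about a factor of 4.
for half in (0.5, 0.25):
    xa, xb = 2.5 - half, 2.5 + half
    err = abs(lerp(xa, math.sin(xa), xb, math.sin(xb), 2.5) - math.sin(2.5))
    print(err)
```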

In words, the error is proportional to the square of the distance between the data points. The error in some other methods, including polynomial interpolation and spline interpolation (described below), is proportional to higher powers of the distance between the data points. These methods also produce smoother interpolants.

### Polynomial interpolation

(Figure: plot of the data with polynomial interpolation applied.)

Polynomial interpolation is a generalization of linear interpolation. Note that the linear interpolant is a linear function. We now replace this interpolant with a polynomial of higher degree.

Consider again the problem given above. The following sixth-degree polynomial goes through all seven points:
f(x) = -0.0001521 x^6 - 0.003130 x^5 + 0.07321 x^4 - 0.3577 x^3 + 0.2255 x^2 + 0.9038 x.

Substituting *x* = 2.5, we find that *f*(2.5) = 0.5965.

Generally, if we have *n* data points, there is exactly one polynomial of degree at most *n* − 1 going through all the data points. The interpolation error is proportional to the distance between the data points to the power *n*. Furthermore, the interpolant is a polynomial and thus infinitely differentiable. So, we see that polynomial interpolation overcomes most of the problems of linear interpolation.

However, polynomial interpolation also has some disadvantages. Calculating the interpolating polynomial is computationally expensive (see computational complexity) compared to linear interpolation. Furthermore, polynomial interpolation may exhibit oscillatory artifacts, especially at the end points (see Runge's phenomenon).

Polynomial interpolation can estimate local maxima and minima that are outside the range of the samples, unlike linear interpolation. For example, the interpolant above has a local maximum at *x* ≈ 1.566, *f*(*x*) ≈ 1.003 and a local minimum at *x* ≈ 4.708, *f*(*x*) ≈ −1.003. However, these maxima and minima may exceed the theoretical range of the function; for example, a function that is always positive may have an interpolant with negative values, and whose inverse therefore contains false vertical asymptotes.

More generally, the shape of the resulting curve, especially for very high or low values of the independent variable, may be contrary to common sense, i.e. to what is known about the experimental system which has generated the data points. These disadvantages can be reduced by using spline interpolation or restricting attention to Chebyshev polynomials.
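As a quick numerical check (a sketch, assuming NumPy is available): fitting the unique degree-6 polynomial through the seven table points reproduces the value quoted above.

```python
import numpy as np

# The seven data points from the table (values of the unknown function f).
x = np.arange(7.0)
y = np.array([0.0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])

# With 7 points and degree 6, the least-squares fit is an exact interpolant.
p = np.poly1d(np.polyfit(x, y, 6))

print(round(float(p(2.5)), 4))  # ≈ 0.5965, the value given in the text
```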

### Spline interpolation

(Figure: plot of the data with spline interpolation applied.)

Remember that linear interpolation uses a linear function for each of the intervals [x_k, x_{k+1}]. Spline interpolation uses low-degree polynomials in each of the intervals, and chooses the polynomial pieces such that they fit smoothly together. The resulting function is called a spline.

For instance, the natural cubic spline is piecewise cubic and twice continuously differentiable. Furthermore, its second derivative is zero at the end points. The natural cubic spline interpolating the points in the table above is given by

f(x) = \begin{cases}
-0.1522 x^3 + 0.9937 x, & \text{if } x \in [0,1], \\
-0.01258 x^3 - 0.4189 x^2 + 1.4126 x - 0.1396, & \text{if } x \in [1,2], \\
\quad \vdots \\
-0.1871 x^3 + 3.3673 x^2 - 19.3370 x + 34.9282, & \text{if } x \in [5,6].
\end{cases}

In this case we get *f*(2.5) = 0.5972.

Like polynomial interpolation, spline interpolation incurs a smaller error than linear interpolation and the interpolant is smoother, yet it is easier to evaluate than the high-degree polynomials used in polynomial interpolation. However, the global nature of the basis functions leads to ill-conditioning. This is completely mitigated by using splines of compact support, such as are implemented in Boost.Math and discussed in Kress (Rainer Kress, *Numerical Analysis*, 1998).
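As a sketch, assuming SciPy is available, the same natural cubic spline can be built with `scipy.interpolate.CubicSpline` using its `bc_type='natural'` boundary condition (zero second derivative at both end points, as described above).

```python
import numpy as np
from scipy.interpolate import CubicSpline

# The seven data points from the table above.
x = np.arange(7.0)
y = np.array([0.0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])

# 'natural' imposes s''(x_0) = s''(x_n) = 0, matching the natural cubic
# spline; the natural spline through given data is unique.
s = CubicSpline(x, y, bc_type='natural')

print(round(float(s(2.5)), 4))  # ≈ 0.5972, the value given in the text
```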

## Function approximation

Interpolation is a common way to approximate functions. Given a function f : [a,b] \to \mathbb{R} with a set of points x_1, x_2, \dots, x_n \in [a,b], one can form a function s : [a,b] \to \mathbb{R} such that f(x_i) = s(x_i) for i = 1, 2, \dots, n (that is, s interpolates f at these points). In general, an interpolant need not be a good approximation, but there are well-known and often reasonable conditions where it will be. For example, if f \in C^4([a,b]) (four times continuously differentiable), then cubic spline interpolation has an error bound given by \|f - s\|_\infty \le C \|f^{(4)}\|_\infty h^4, where h = \max_{i = 1, 2, \dots, n-1} |x_{i+1} - x_i| and C is a constant. (Hall, Charles A.; Meyer, Weston W., "Optimal Error Bounds for Cubic Spline Interpolation", *Journal of Approximation Theory*, 1976, 16(2), 105–122.)

## Via Gaussian processes

A Gaussian process is a powerful non-linear interpolation tool. Many popular interpolation tools are actually equivalent to particular Gaussian processes. Gaussian processes can be used not only for fitting an interpolant that passes exactly through the given data points but also for regression, i.e., for fitting a curve through noisy data. In the geostatistics community, Gaussian process regression is also known as kriging.

## Other forms

Other forms of interpolation can be constructed by picking a different class of interpolants. For instance, rational interpolation is interpolation by rational functions using Padé approximants, and trigonometric interpolation is interpolation by trigonometric polynomials using Fourier series. Another possibility is to use wavelets. The Whittaker–Shannon interpolation formula can be used if the number of data points is infinite.

Sometimes we know not only the value of the function that we want to interpolate at some points, but also its derivative. This leads to Hermite interpolation problems.

When each data point is itself a function, it can be useful to see the interpolation problem as a partial advection problem between each data point. This idea leads to the displacement interpolation problem used in transportation theory.
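To illustrate the Hermite interpolation idea mentioned above (a sketch assuming SciPy is available; the sin/cos data are illustrative choices, not from the article): supplying derivative values at the nodes lets a piecewise-cubic interpolant match both values and slopes.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Sample sin(x) together with its derivative cos(x) at the integers 0..6.
x = np.arange(7.0)
y = np.sin(x)
dydx = np.cos(x)

# The Hermite interpolant matches y and dydx exactly at every node.
h = CubicHermiteSpline(x, y, dydx)

# Between nodes it tracks sin closely (cubic Hermite error is O(h^4)).
print(abs(float(h(2.5)) - np.sin(2.5)) < 1e-2)
```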

## In higher dimensions

(Figure: comparison of 1D and 2D interpolation; examples of nearest-neighbor, bilinear, and bicubic interpolation in two dimensions.)

Multivariate interpolation is the interpolation of functions of more than one variable. Methods include bilinear interpolation and bicubic interpolation in two dimensions, and trilinear interpolation in three dimensions. They can be applied to gridded or scattered data.

## In digital signal processing

In the domain of digital signal processing, the term interpolation refers to the process of converting a sampled digital signal (such as a sampled audio signal) to a higher sampling rate (upsampling) using various digital filtering techniques (e.g., convolution with a frequency-limited impulse signal). In this application there is a specific requirement that the harmonic content of the original signal be preserved without creating aliased harmonic content above the original Nyquist limit of the signal (i.e., above fs/2 of the original sample rate). An early and fairly elementary discussion of this subject can be found in Rabiner and Crochiere's book *Multirate Digital Signal Processing* (R.E. Crochiere and L.R. Rabiner, Englewood Cliffs, NJ: Prentice–Hall, 1983).
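A band-limited upsampling sketch (assuming SciPy is available; `scipy.signal.resample` does FFT-based resampling, one of several possible filtering approaches, not necessarily the one the text has in mind): doubling the sample rate of a pure tone fills in new samples between the originals without creating content above the original Nyquist limit.

```python
import numpy as np
from scipy.signal import resample

# A pure tone sampled over an exact number of periods (32 samples, period 8),
# so it is exactly representable in the FFT basis.
n = 32
t = np.arange(n)
x = np.sin(2 * np.pi * t / 8)

# Upsample by a factor of 2 using FFT-based (band-limited) interpolation.
y = resample(x, 2 * n)

# For this exactly periodic tone, every other output sample coincides
# with an original sample.
print(np.allclose(y[::2], x))
```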

## Related concepts

The term *extrapolation* is used for finding data points outside the range of known data points.

In curve fitting problems, the constraint that the interpolant has to go exactly through the data points is relaxed. It is only required to approach the data points as closely as possible (within some other constraints). This requires parameterizing the potential interpolants and having some way of measuring the error. In the simplest case this leads to least squares approximation.

Approximation theory studies how to find the best approximation to a given function by another function from some predetermined class, and how good this approximation is. This clearly yields a bound on how well the interpolant can approximate the unknown function.

## Generalization

If we consider x as a variable in a topological space, and the function f(x) as mapping to a Banach space, then the problem is treated as "interpolation of operators" (Colin Bennett, Robert C. Sharpley, *Interpolation of Operators*, Academic Press, 1988). The classical results about interpolation of operators are the Riesz–Thorin theorem and the Marcinkiewicz theorem. There are also many other subsequent results.

## See also

- Barycentric coordinates – for interpolating within a triangle or tetrahedron
- Bilinear interpolation
- Brahmagupta's interpolation formula
- Extrapolation
- Fractal interpolation
- Imputation (statistics)
- Lagrange interpolation
- Missing data
- Multivariate interpolation
- Newton–Cotes formulas
- Polynomial interpolation
- Radial basis function interpolation
- Simple rational approximation

## References

## External links

- Online tools for linear, quadratic, cubic spline, and polynomial interpolation with visualisation and JavaScript source code.
- Sol Tutorials - Interpolation Tricks
- Compactly Supported Cubic B-Spline interpolation in Boost.Math
- Barycentric rational interpolation in Boost.Math
- Interpolation via the Chebyshev transform in Boost.Math

**- content above as imported from Wikipedia**

- "interpolation" does not exist on GetWiki (yet)

- time: 4:42am EDT - Thu, Jun 27 2019

[ this remote article is provided by Wikipedia ]


© 2019 M.R.M. PARROTT | ALL RIGHTS RESERVED