Gaussian quadrature

In numerical analysis, a quadrature rule is an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration. (See numerical integration for more on quadrature rules.) An n-point Gaussian quadrature rule, named after Carl Friedrich Gauss, is a quadrature rule constructed to yield an exact result for polynomials of degree 2n − 1 or less by a suitable choice of the points xi and weights wi for i = 1,...,n. The domain of integration for such a rule is conventionally taken as [−1, 1], so the rule is stated as


 * $$\int_{-1}^1 f(x)\,dx \approx \sum_{i=1}^n w_i f(x_i).$$

It can be shown (see Press et al., or Stoer and Bulirsch) that the evaluation points are precisely the roots of a polynomial belonging to a class of orthogonal polynomials.

Rules for the basic problem
For the integration problem stated above, the associated polynomials are Legendre polynomials, Pn(x). With the nth polynomial normalized to give Pn(1) = 1, the ith Gauss node, xi, is the ith root of Pn; its weight is given by
 * $$ w_i = \frac{2}{\left( 1-x_i^2 \right) (P'_n(x_i))^2} \,\!$$

Some low-order rules for solving the integration problem are listed below (in every case the weights sum to 2, the length of the interval):

 * n = 1: node 0, weight 2
 * n = 2: nodes ±1/√3 ≈ ±0.57735, weights 1
 * n = 3: nodes 0 and ±√(3/5) ≈ ±0.774597, weights 8/9 and 5/9
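As a concrete check, the exactness property and the weight formula above can be verified numerically; this sketch assumes NumPy, whose leggauss routine returns the nodes and weights of the n-point Gauss–Legendre rule:

```python
import numpy as np

# Nodes and weights of the n-point Gauss-Legendre rule on [-1, 1].
n = 5
nodes, weights = np.polynomial.legendre.leggauss(n)

# An n-point rule is exact for polynomials of degree 2n - 1 or less.
# Example: f(x) = x^8 has degree 8 <= 2*5 - 1 = 9; the exact integral is 2/9.
approx = np.sum(weights * nodes**8)
exact = 2.0 / 9.0

# Verify the weight formula w_i = 2 / ((1 - x_i^2) * P_n'(x_i)^2),
# where P_n is the Legendre polynomial normalized so that P_n(1) = 1.
pn_deriv = np.polynomial.legendre.Legendre.basis(n).deriv()
w_formula = 2.0 / ((1.0 - nodes**2) * pn_deriv(nodes)**2)
```

The two weight computations agree to machine precision, as does the integral of x^8.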

Change of interval for Gaussian quadrature
An integral over [a, b] must be changed into an integral over [−1, 1] before applying the Gaussian quadrature rule. This change of interval can be done in the following way:



 * $$ \int_a^b f(x)\,dx = \frac{b-a}{2} \int_{-1}^1 f\left(\frac{b-a}{2}x + \frac{a+b}{2}\right)\,dx $$

After applying the Gaussian quadrature rule, the following approximation is obtained:



 * $$ \int_a^b f(x)\,dx \approx \frac{b-a}{2} \sum_{i=1}^n w_i f\left(\frac{b-a}{2}x_i + \frac{a+b}{2}\right) $$
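A minimal sketch of this change of interval, assuming NumPy's leggauss for the nodes and weights (the function name gauss_on_interval is illustrative):

```python
import numpy as np

def gauss_on_interval(f, a, b, n):
    """Approximate the integral of f over [a, b] with an n-point
    Gauss-Legendre rule, via the affine change of variables
    x = (b - a)/2 * t + (a + b)/2 for t in [-1, 1]."""
    t, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * t + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(w * f(x))

# Example: the integral of e^x over [0, 1] is e - 1.
approx = gauss_on_interval(np.exp, 0.0, 1.0, 5)
```

Even with only 5 points the result agrees with e − 1 to about 12 digits, illustrating the rapid convergence of Gaussian quadrature for smooth integrands.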

Other forms of Gaussian quadrature
The integration problem can be expressed in a slightly more general way by introducing a positive weight function ω into the integrand, and allowing an interval other than [−1, 1]. That is, the problem is to calculate


 * $$ \int_a^b \omega(x)\,f(x)\,dx $$

for some choices of a, b, and ω. For a = −1, b = 1, and ω(x) = 1, the problem is the same as that considered above. Other choices lead to other integration rules, including:

 * [−1, 1] with ω(x) = (1 − x²)^(−1/2): Gauss–Chebyshev quadrature
 * [−1, 1] with ω(x) = (1 − x)^α (1 + x)^β: Gauss–Jacobi quadrature
 * [0, ∞) with ω(x) = e^(−x): Gauss–Laguerre quadrature
 * (−∞, ∞) with ω(x) = e^(−x²): Gauss–Hermite quadrature

Equation numbers for such rules are given in Abramowitz and Stegun (A & S).

Fundamental theorem
Let $$p_n$$ be a nontrivial polynomial of degree n such that



 * $$ \int_a^b \omega(x) \, x^k p_n(x) \, dx = 0, \quad \text{for all } k = 0, 1, \ldots, n-1. $$

If we pick the nodes to be the zeros of $$p_n$$, then there exist weights wi which make the computed integral exact for all polynomials of degree 2n − 1 or less. Furthermore, all these nodes will lie in the open interval (a, b).

The polynomial $$p_n$$ is said to be an orthogonal polynomial of degree n associated to the weight function $$\omega (x)$$. It is unique up to a constant normalization factor.

Computation of Gauss quadrature rules
For computing the nodes $$x_i$$ and weights $$w_i$$ of Gaussian quadrature rules, the fundamental tool is the three-term recurrence relation satisfied by the set of orthogonal polynomials associated to the corresponding weight function.

If, for instance, $$p_n$$ is the monic orthogonal polynomial of degree n (the orthogonal polynomial of degree n with leading coefficient equal to one), one can show that such orthogonal polynomials are related through the three-term recurrence relation


 * $$p_{n+1}(x)+(B_n-x)p_n (x)+A_n p_{n-1}(x)=0, \qquad n=1,2,\ldots$$

From this, nodes and weights can be computed from the eigenvalues and eigenvectors of an associated linear algebra problem. This procedure is known as the Golub–Welsch algorithm.

The starting idea comes from the observation that, if $$x_j$$ is a root of the orthogonal polynomial $$p_n$$, then, using the previous recurrence formula for $$k=0,1,\ldots, n-1$$ and because $$p_n (x_j)=0$$, we have

$$ J\tilde{P}=x_j \tilde{P} $$

where $$ \tilde{P}=[p_0 (x_j),p_1 (x_j),...,p_{n-1}(x_j)]^{T} $$

and $$J$$ is the so-called Jacobi matrix:

$$ \mathbf{J}=\left( \begin{array}{llllll} B_0     & 1       & 0      & \ldots  & \ldots  & \ldots\\ A_1      & B_1     & 1      & 0       & \ldots  & \ldots \\ 0        & A_2     & B_2    & 1       & 0       & \ldots \\ \ldots   & \ldots  & \ldots & \ldots  & \ldots  & \ldots \\ \ldots   & \ldots  & \ldots & A_{n-2}  & B_{n-2}  & 1 \\ \ldots   & \ldots  & \ldots & \ldots  & A_{n-1}  &   B_{n-1} \end{array} \right) $$

The nodes of Gaussian quadrature can therefore be computed as the eigenvalues of a tridiagonal matrix.

For computing the weights and nodes, it is preferable to consider the symmetric tridiagonal matrix $$\mathcal{J}$$ with elements $$\mathcal{J}_{i,i}=J_{i,i}$$, $$i=1,\ldots,n$$ and $$\mathcal{J}_{i-1,i}=\mathcal{J}_{i,i-1}=\sqrt{J_{i,i-1}J_{i-1,i}},\, i=2,\ldots,n$$. The matrices $$\mathbf{J}$$ and $$\mathcal{J}$$ are similar and therefore have the same eigenvalues (the nodes). The weights can be computed from the eigenvectors of $$\mathcal{J}$$: if $$\phi^{(j)}$$ is a normalized eigenvector (i.e., an eigenvector with Euclidean norm equal to one) associated to the eigenvalue $$x_j$$, the corresponding weight can be computed from the first component of this eigenvector, namely:

$$ w_j=\mu_0 \left(\phi_1^{(j)}\right)^2 $$

where $$\mu_0$$ is the integral of the weight function

$$ \mu_0=\int_a^b \omega(x)\, dx $$


Error estimates
The error of a Gaussian quadrature rule can be stated as follows. For an integrand which has 2n continuous derivatives,


 * $$ \int_a^b \omega(x)\,f(x)\,dx - \sum_{i=1}^n w_i\,f(x_i) = \frac{f^{(2n)}(\xi)}{(2n)!} \, (p_n,p_n) $$

for some ξ in (a, b), where pn is the monic orthogonal polynomial of degree n and where


 * $$ (f,g) = \int_a^b \omega(x) f(x) g(x) \, dx . \,\!$$

In the important special case of ω(x) = 1, we have the error estimate


 * $$ \frac{(b-a)^{2n+1} (n!)^4}{(2n+1)[(2n)!]^3} f^{(2n)} (\xi), \qquad a < \xi < b . \,\!$$

Stoer and Bulirsch remark that this error estimate is inconvenient in practice, since it may be difficult to estimate the order 2n derivative, and furthermore the actual error may be much less than a bound established by the derivative. Another approach is to use two Gaussian quadrature rules of different orders, and to estimate the error as the difference between the two results. For this purpose, Gauss–Kronrod quadrature rules can be useful.
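A minimal sketch of this two-rule error estimate, assuming NumPy (note that, unlike a Kronrod extension, the two rules here share no evaluation points, so the integrand is evaluated 2n + 1 times in total):

```python
import numpy as np

def gauss_with_error_estimate(f, n):
    """Approximate the integral of f over [-1, 1] with an (n + 1)-point
    Gauss-Legendre rule, estimating the error as the difference from
    the n-point rule."""
    x_lo, w_lo = np.polynomial.legendre.leggauss(n)
    x_hi, w_hi = np.polynomial.legendre.leggauss(n + 1)
    low = np.sum(w_lo * f(x_lo))
    high = np.sum(w_hi * f(x_hi))
    return high, abs(high - low)

# Example: the integral of cos over [-1, 1] is 2 sin(1).
value, err = gauss_with_error_estimate(np.cos, 5)
exact = 2.0 * np.sin(1.0)
```

The difference |high − low| is typically a conservative estimate, since the higher-order result is usually far more accurate than the lower-order one.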

Gauss–Kronrod rules
If the interval [a, b] is subdivided, the Gauss evaluation points of the new subintervals never coincide with the previous evaluation points (except at the midpoint for odd numbers of points), and thus the integrand must be evaluated at every point. Gauss–Kronrod rules are extensions of Gauss quadrature rules generated by adding $$n+1$$ points to an $$n$$-point rule in such a way that the resulting rule is exact for polynomials of degree $$3n+1$$. This allows for computing higher-order estimates while reusing the function values of a lower-order estimate. The difference between a Gauss quadrature rule and its Kronrod extension is often used as an estimate of the approximation error. The rules are named after Alexander Kronrod, who invented them in the 1960s. The algorithms in QUADPACK are based on Gauss–Kronrod rules.

A popular example combines a 7-point Gauss rule with a 15-point Kronrod rule. Because the Gauss points are incorporated into the Kronrod points, a total of only 15 function evaluations yields both a quadrature estimate and an error estimate.


(G7, K15) nodes and weights on [−1, 1]. An asterisk (∗) marks the Kronrod nodes that coincide with Gauss nodes.

Gauss nodes              Weights
±0.94910 79123 42759     0.12948 49661 68870
±0.74153 11855 99394     0.27970 53914 89277
±0.40584 51513 77397     0.38183 00505 05119
 0.00000 00000 00000     0.41795 91836 73469

Kronrod nodes            Weights
±0.99145 53711 20813     0.02293 53220 10529
±0.94910 79123 42759 ∗   0.06309 20926 29979
±0.86486 44233 59769     0.10479 00103 22250
±0.74153 11855 99394 ∗   0.14065 32597 15525
±0.58608 72354 67691     0.16900 47266 39267
±0.40584 51513 77397 ∗   0.19035 05780 64785
±0.20778 49550 07898     0.20443 29400 75298
 0.00000 00000 00000 ∗   0.20948 21410 84728

Patterson showed how to find further extensions of this type.
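The reuse of function values can be sketched as follows, hardcoding the (G7, K15) nodes and weights from the table above; gauss_kronrod_15 is an illustrative name and NumPy is assumed:

```python
import numpy as np

# (G7, K15) Kronrod nodes and weights on [-1, 1]; the entries at odd
# indices (1, 3, 5, 7) are the embedded Gauss nodes.
xk = np.array([0.991455371120813, 0.949107912342759, 0.864864423359769,
               0.741531185599394, 0.586087235467691, 0.405845151377397,
               0.207784955007898, 0.0])
wk = np.array([0.022935322010529, 0.063092092629979, 0.104790010322250,
               0.140653259715525, 0.169004726639267, 0.190350578064785,
               0.204432940075298, 0.209482141084728])
wg = np.array([0.129484966168870, 0.279705391489277, 0.381830050505119,
               0.417959183673469])

def gauss_kronrod_15(f):
    """Return the K15 estimate of the integral of f over [-1, 1] and
    the |G7 - K15| error estimate, reusing all 15 evaluations."""
    fk = f(xk) + f(-xk)         # symmetric node pairs summed together
    fk[-1] = f(0.0)             # the central node appears only once
    k15 = np.sum(wk * fk)
    g7 = np.sum(wg * fk[1::2])  # Gauss nodes are every other Kronrod node
    return k15, abs(k15 - g7)

# Example: the integral of exp over [-1, 1] is e - 1/e.
value, err = gauss_kronrod_15(np.exp)
exact = np.e - 1.0 / np.e
```

Only 15 evaluations of f are made, yet both a high-order estimate and an error estimate are obtained, which is the point of the Kronrod construction.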