Conway-Maxwell-Poisson distribution

In probability theory and statistics, the Poisson distribution is the most common distribution used to model count data. However, because the Poisson distribution is governed by a single parameter, its mean, data exhibiting overdispersion or underdispersion relative to the Poisson model must be modeled with alternative distributions to account for these statistically significant deviations. Typically, the negative binomial distribution is used to model over-dispersed data; however, the Conway-Maxwell-Poisson (CMP) distribution provides an improved, yet relatively unknown, alternative. In particular, the CMP distribution can accommodate both over- and under-dispersed data, and it is a member of the exponential family, which confers many favorable properties.

Conway-Maxwell-Poisson distribution
The CMP distribution was originally proposed by Conway and Maxwell in 1962 as a way to handle queueing systems with state-dependent service rates. The probabilistic and statistical properties of the distribution were published by Shmueli et al. (2005).

CMP is defined to be the distribution with probability mass function

$$\Pr(X = x) = f(x; \lambda, \nu) = \frac{\lambda^x}{(x!)^\nu}\frac{1}{Z(\lambda,\nu)},$$

for $$x = 0, 1, 2, \ldots$$, $$\lambda > 0$$ and $$\nu \geq 0$$, where

$$Z(\lambda,\nu) = \sum_{j=0}^\infty \frac{\lambda^j}{(j!)^\nu}.$$

The function $$Z(\lambda,\nu)$$ serves as a normalization constant so the probability mass function sums to one. Note that $$Z(\lambda,\nu)$$ does not have a closed form.
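Since $$Z(\lambda,\nu)$$ lacks a closed form, in practice it is approximated by truncating the series once the terms become negligible. A minimal sketch in Python (function names are illustrative, not from any particular library):

```python
import math

def cmp_z(lam, nu, tol=1e-12, max_terms=1000):
    """Approximate the normalizing constant Z(lambda, nu) by summing the
    series until the current term falls below `tol`."""
    total, term = 0.0, 1.0  # j = 0 term is lambda^0 / (0!)^nu = 1
    for j in range(max_terms):
        total += term
        term *= lam / (j + 1) ** nu  # ratio of term j+1 to term j
        if term < tol:
            break
    return total

def cmp_pmf(x, lam, nu):
    """CMP probability mass function Pr(X = x)."""
    return lam ** x / math.factorial(x) ** nu / cmp_z(lam, nu)
```

For $$\nu = 1$$ the series reduces to the exponential series, so $$Z(\lambda, 1) = e^\lambda$$, which provides a quick check of the truncation.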

The additional parameter $$\nu$$, which does not appear in the Poisson distribution, allows for adjustment of the rate of decay. This rate of decay governs the non-linear decrease in ratios of successive probabilities; specifically,

$$\frac{\Pr(X = x-1)}{\Pr(X = x)} = \frac{x^\nu}{\lambda}.$$

When $$\nu = 1$$, the CMP distribution reduces to the standard Poisson distribution, and as $$\nu \to \infty$$ it approaches a Bernoulli distribution with parameter $$\lambda/(1+\lambda)$$. When $$\nu = 0$$, the CMP distribution reduces to a geometric distribution with success probability $$1-\lambda$$, provided $$\lambda < 1$$.
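These limiting cases are easy to verify numerically. The sketch below uses a hypothetical helper that evaluates the CMP pmf by truncating the series in log space (so that $$(j!)^\nu$$ does not overflow), alongside the Poisson, geometric, and Bernoulli reference values:

```python
import math

def cmp_pmf(x, lam, nu, terms=200):
    """CMP pmf via a truncated series, computed in log space to avoid
    overflow of (j!)**nu for large j."""
    log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(terms)]
    m = max(log_terms)
    log_z = m + math.log(sum(math.exp(t - m) for t in log_terms))
    return math.exp(x * math.log(lam) - nu * math.lgamma(x + 1) - log_z)

lam = 0.6
poisson_ref = math.exp(-lam) * lam ** 2 / math.factorial(2)  # Poisson pmf at x = 2
geometric_ref = (1 - lam) * lam ** 2                         # geometric pmf at x = 2
bernoulli_ref = lam / (1 + lam)                              # Bernoulli P(X = 1)
```

Here `cmp_pmf(2, lam, 1.0)` matches `poisson_ref`, `cmp_pmf(2, lam, 0.0)` matches `geometric_ref`, and `cmp_pmf(1, lam, 50.0)` is already very close to `bernoulli_ref`.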

For the CMP distribution, moments can be found through the recursive formula

$$\operatorname{E}[X^{r+1}] = \begin{cases} \lambda \, \operatorname{E}[(X+1)^{1-\nu}] & \text{ if } r = 0, \\ \lambda \, \frac{d}{d\lambda}\operatorname{E}[X^r] + \operatorname{E}[X]\operatorname{E}[X^r] & \text{ if } r > 0. \end{cases}$$
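The $$r = 0$$ case follows directly from the ratio of successive probabilities, and can be checked numerically. A small sketch, assuming a generic truncated-series expectation helper (name illustrative):

```python
import math

def cmp_expect(f, lam, nu, terms=200):
    """E[f(X)] under CMP(lam, nu), using a truncated series in log space."""
    log_w = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(terms)]
    m = max(log_w)
    w = [math.exp(t - m) for t in log_w]
    z = sum(w)
    return sum(f(j) * wj for j, wj in enumerate(w)) / z

lam, nu = 1.8, 1.4
lhs = cmp_expect(lambda x: x, lam, nu)                          # E[X]
rhs = lam * cmp_expect(lambda x: (x + 1) ** (1 - nu), lam, nu)  # lambda * E[(X+1)^(1-nu)]
```

For any valid $$(\lambda, \nu)$$, `lhs` and `rhs` agree to numerical precision.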

Parameter estimation
There are a few methods of estimating the parameters of the CMP distribution from data. Two are discussed here: the "quick and crude method" and the "accurate and intensive method".

Quick and crude method: weighted least squares
The "quick and crude method" provides a simple, efficient method to derive rough estimates of the parameters of the CMP distribution and determine if the distribution would be an appropriate model. Following the use of this method, an alternative method should be employed to compute more accurate estimates of the parameters if the model is deemed appropriate.

This method uses the relationship of successive probabilities discussed above. Taking logarithms of both sides of that equation yields the linear relationship

$$\log \frac{p_{x-1}}{p_x} = -\log \lambda + \nu \log x,$$

where $$p_x$$ denotes $$\mathbb{P}(X = x)$$. When estimating the parameters, the probabilities can be replaced by the relative frequencies of $$x$$ and $$x-1$$. To determine whether the CMP distribution is an appropriate model, these log-ratios should be plotted against $$\log x$$ for all ratios without zero counts. If the plotted points appear to be linear, then the model is likely to be a good fit.

Once the appropriateness of the model is determined, the parameters can be estimated by fitting a regression of $$\log (\hat p_{x-1} / \hat p_x)$$ on $$\log x$$. However, the basic assumption of homoscedasticity is violated, so a weighted least squares regression must be used. The inverse weight matrix will have the variances of each ratio on the diagonal and the one-step covariances on the first off-diagonal, both given below.

$$\mathbb{V}\left[\log \frac{\hat p_{x-1}}{\hat p_x}\right] \approx \frac{1}{np_x} + \frac{1}{np_{x-1}}$$

$$\operatorname{cov}\left(\log \frac{\hat p_{x-1}}{\hat p_x}, \log \frac{\hat p_x}{\hat p_{x+1}} \right) \approx -\frac{1}{np_x}$$
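The fit itself is then ordinary generalized least squares with this tridiagonal inverse-weight matrix. In the sketch below, exact CMP probabilities (hypothetical values $$\lambda = 2$$, $$\nu = 1.5$$) stand in for the observed relative frequencies, so the regression recovers the parameters almost exactly; with real data, $$\hat p_x$$ would be relative frequencies and $$n$$ the sample size:

```python
import numpy as np
from math import factorial

# Exact CMP probabilities (illustrative lambda = 2, nu = 1.5) in place of
# observed relative frequencies p_hat.
lam_true, nu_true = 2.0, 1.5
xs = np.arange(7)
p = lam_true ** xs / np.array([factorial(int(x)) for x in xs], dtype=float) ** nu_true
p /= p.sum()
n = 1000  # nominal sample size, used only in the weights

ratios = np.arange(1, 7)  # the ratio p_{x-1}/p_x exists for x = 1..6
y = np.log(p[ratios - 1] / p[ratios])
X = np.column_stack([np.ones(len(ratios)), np.log(ratios)])

# Inverse-weight matrix: variances on the diagonal, one-step covariances
# on the first off-diagonal.
V = np.diag(1 / (n * p[ratios]) + 1 / (n * p[ratios - 1]))
for i in range(len(ratios) - 1):
    V[i, i + 1] = V[i + 1, i] = -1 / (n * p[ratios[i]])
W = np.linalg.inv(V)

# Weighted least squares: beta = (X'WX)^{-1} X'Wy
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
lam_hat, nu_hat = float(np.exp(-beta[0])), float(beta[1])
```

Since the intercept estimates $$-\log \lambda$$ and the slope estimates $$\nu$$, `lam_hat` and `nu_hat` recover 2 and 1.5 here.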

Accurate and intensive method: maximum likelihood
The CMP likelihood function is

$$\mathcal{L}(\lambda,\nu \mid x_1,\ldots,x_n) = \lambda^{S_1} \exp(-\nu S_2) \, Z^{-n}(\lambda, \nu)$$

where $$S_1 = \sum_{i=1}^n x_i$$ and $$S_2 = \sum_{i=1}^n \log x_i!$$. Maximizing the likelihood yields the following two equations

$$\mathbb{E}[X] = \bar X$$

$$\mathbb{E}[\log X!] = \overline{\log X!}$$

which do not have an analytic solution.

Instead, the maximum likelihood estimates are approximated numerically by the Newton-Raphson method. In each iteration, the expectations, variances, and covariance of $$X$$ and $$\log X!$$ are approximated using the estimates of $$\lambda$$ and $$\nu$$ from the previous iteration, via the expression

$$\mathbb{E}[f(X)] = \sum_{j=0}^\infty f(j) \frac{\lambda^j}{(j!)^\nu \, Z(\lambda, \nu)}.$$
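Each iteration therefore needs quantities such as $$\mathbb{E}[X]$$, $$\mathbb{V}[X]$$, and $$\mathbb{E}[\log X!]$$ evaluated at the current estimates, which can be approximated by truncating this series. A minimal sketch (function name illustrative):

```python
import math

def cmp_moments(lam, nu, terms=200):
    """Return E[X], Var(X), and E[log X!] under CMP(lam, nu), approximated
    by truncating the series; weights are computed in log space."""
    log_w = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(terms)]
    m = max(log_w)
    w = [math.exp(t - m) for t in log_w]
    z = sum(w)
    ex = sum(j * wj for j, wj in enumerate(w)) / z
    ex2 = sum(j * j * wj for j, wj in enumerate(w)) / z
    e_logfact = sum(math.lgamma(j + 1) * wj for j, wj in enumerate(w)) / z  # log j! = lgamma(j+1)
    return ex, ex2 - ex ** 2, e_logfact
```

At $$\nu = 1$$ this recovers the Poisson moments $$\mathbb{E}[X] = \mathbb{V}[X] = \lambda$$, a convenient sanity check between iterations.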

This is continued until convergence of $$\hat\lambda$$ and $$\hat\nu$$.

Generalized Linear Model
The basic Conway-Maxwell Poisson (COM-Poisson) distribution discussed above has also been used as the basis for a generalized linear model (GLM) using a Bayesian formulation. Guikema and Coffelt (2008) developed a dual-link GLM based on the COM-Poisson distribution, and Lord et al. (2008) used this model to evaluate traffic accident data. The COM-Poisson GLM developed by Guikema and Coffelt (2008) is based on a reformulation of the COM-Poisson distribution above, replacing $$\lambda$$ with $$\mu=\lambda^{1/\nu}$$. The integral part of $$\mu$$ is then the mode of the distribution. Guikema and Coffelt (2008) and Lord et al. (2008) used a full Bayesian estimation approach with MCMC sampling implemented in WinBUGS, with non-informative priors for the regression parameters. This approach is computationally expensive, but it yields the full posterior distributions for the regression parameters and allows expert knowledge to be incorporated through the use of informative priors.

Galit Shmueli (University of Maryland) and Kimberly Sellers (Georgetown University) developed a classical GLM formulation for CMP regression which generalizes both Poisson regression and logistic regression. They take advantage of the exponential family properties of the CMP distribution to obtain elegant model estimation (via maximum likelihood), inference, and interpretation. This approach requires substantially less computational time than the Bayesian approach of Guikema and Coffelt (2008), at the cost of not allowing expert knowledge to be incorporated into the model. Whereas the Guikema and Coffelt (2008) Bayesian formulation yields full posterior distributions, the Sellers-Shmueli formulation yields standard errors for the regression parameters (via the Fisher information matrix). It also provides a statistical test for the level of dispersion relative to a Poisson model. Code for fitting a CMP regression and testing for dispersion is available at http://www9.georgetown.edu/faculty/kfs7/research.

The two GLM frameworks developed for the COM-Poisson distribution significantly extend the usefulness of this distribution for data analysis problems.