Zero-truncated negative binomial regression

Zero-truncated negative binomial regression is used to model count data for which the value zero cannot occur and for which overdispersion exists.
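To make "the value zero cannot occur" concrete: the zero-truncated negative binomial simply rescales the ordinary negative binomial probabilities by the probability of a nonzero count, P(Y = y | Y > 0) = P(Y = y) / (1 - P(Y = 0)). A minimal sketch in R (the mu and size values here are arbitrary, chosen only for illustration):

## ordinary negative binomial probabilities for counts 0 through 5
p <- dnbinom(0:5, mu = 5, size = 1.5)

## zero-truncated probabilities: drop y = 0 and rescale by P(Y > 0)
p_trunc <- dnbinom(1:5, mu = 5, size = 1.5) / (1 - dnbinom(0, mu = 5, size = 1.5))

round(cbind(y = 1:5, nb = p[-1], ztnb = p_trunc), 4)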

require(foreign)
## Loading required package: foreign
require(ggplot2)
## Loading required package: ggplot2
require(VGAM)
## Loading required package: VGAM
## Loading required package: stats4
## Loading required package: splines
require(boot)
## Loading required package: boot
## 
## Attaching package: 'boot'
## 
## The following objects are masked from 'package:VGAM':
## 
##     logit, simplex

Examples of zero-truncated negative binomial regression

Length of hospital stay data

Let's pursue Example 1 from above.

We have a hypothetical data file, ztp.csv, with 1,493 observations. The length of hospital stay variable is stay. The variable age gives the age group from 1 to 9, which will be treated as an interval-level variable in this example. The variables hmo and died are binary indicator variables for HMO-insured patients and for patients who died while in the hospital, respectively.

Let's look at the data.

dat <- read.csv("http://www.karlin.mff.cuni.cz/~pesta/prednasky/NMFM404/Data/ztp.csv")

dat <- within(dat, {
    hmo <- factor(hmo)
    died <- factor(died)
})

summary(dat)
##       stay             age        hmo      died   
##  Min.   : 1.000   Min.   :1.000   0:1254   0:981  
##  1st Qu.: 4.000   1st Qu.:4.000   1: 239   1:512  
##  Median : 8.000   Median :5.000                   
##  Mean   : 9.729   Mean   :5.234                   
##  3rd Qu.:13.000   3rd Qu.:6.000                   
##  Max.   :74.000   Max.   :9.000

Now let's look at some graphs of the data conditional on various combinations of the variables to get a sense of how the variables work together. We will use the ggplot2 package. First we can look at histograms of stay broken down by hmo on the rows and died on the columns. We also include the marginal distributions, so the lower right panel represents the overall histogram. We use a log base 10 scale to approximate the canonical link function of the Poisson distribution (the natural logarithm).

ggplot(dat, aes(stay)) +
  geom_histogram() +
  scale_x_log10() +
  facet_grid(hmo ~ died, margins=TRUE, scales="free_y")
## stat_bin: binwidth defaulted to range/30. Use 'binwidth = x' to adjust this.

plot of chunk ztnb-hist
From the histograms, it looks like the density of the distribution does vary across levels of hmo and died, with shorter stays for those in HMOs (1) and for those who died, including what seems to be an inflated number of one-day stays. To examine how stay varies across age groups, we can use conditional violin plots, which show a kernel density estimate of the distribution of stay mirrored (hence the violin), conditional on each age group. To further understand the raw data going into each density estimate, we add the raw data on top of the violin plots with a small amount of random noise (jitter) to alleviate overplotting. Finally, to get a sense of the overall trend, we add a locally weighted regression line.

ggplot(dat, aes(factor(age), stay)) +
  geom_violin() +
  geom_jitter(size=1.5) +
  scale_y_log10() +
  stat_smooth(aes(x = age, y = stay, group=1), method="loess")

plot of chunk ztnb-jitter
The distribution of length of stay does not seem to vary much across age groups. This observation from the raw data is corroborated by the relatively flat loess line. Finally, let's look at the proportion of people who lived or died across age groups, by whether or not they are in HMOs.

ggplot(dat, aes(age, fill=died)) +
  geom_histogram(binwidth=.5, position="fill") +
  facet_grid(hmo ~ ., margins=TRUE)

plot of chunk ztnb-prop
For the lowest ages, a smaller proportion of people in HMOs died, but at higher ages there does not seem to be much difference; if anything, a slightly higher proportion of those in HMOs died. Overall, as age group increases, the proportion of those dying increases, as expected.

Analysis methods you might consider

A number of analysis methods could be used for this kind of data. Some are quite reasonable, while others have either fallen out of favor or have limitations.

ZTNB regression model

To fit the zero-truncated negative binomial model, we use the vglm function in the VGAM package. This function fits a very flexible class of models called vector generalized linear models to a wide range of assumed distributions. In our case, we believe the data come from the negative binomial distribution, but without zeros. We therefore model strictly positive counts, using the positive (zero-truncated) negative binomial family via the posnegbinomial function passed to vglm.

m1 <- vglm(stay ~ age + hmo + died, family = posnegbinomial(), data = dat)
summary(m1)
## 
## Call:
## vglm(formula = stay ~ age + hmo + died, family = posnegbinomial(), 
##     data = dat)
## 
## Pearson residuals:
##                Min      1Q  Median     3Q    Max
## loge(munb)  -1.414 -0.7061 -0.2055 0.4501 10.479
## loge(size) -18.407 -0.3057  0.4425 0.7522  1.051
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept):1  2.40833    0.07138  33.740  < 2e-16 ***
## (Intercept):2  0.56864    0.05456  10.423  < 2e-16 ***
## age           -0.01569    0.01300  -1.207   0.2275    
## hmo1          -0.14706    0.05879  -2.501   0.0124 *  
## died1         -0.21777    0.04606  -4.728 2.27e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Number of linear predictors:  2 
## 
## Names of linear predictors: loge(munb), loge(size)
## 
## Dispersion Parameter for posnegbinomial family:   1
## 
## Log-likelihood: -4755.28 on 2981 degrees of freedom
## 
## Number of iterations: 4

The output looks much like the output from an OLS regression: it echoes the call, summarizes the Pearson residuals for each linear predictor, and reports the coefficient estimates with standard errors, z values, and p-values, followed by the log-likelihood and the number of iterations. Note that there are two intercepts because the model has two linear predictors: loge(munb), the log of the mean, and loge(size), the log of the dispersion (size) parameter.
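Because the second linear predictor is loge(size), we can back-transform the second intercept to see the estimated dispersion (size) parameter on its natural scale. A minimal sketch, using the coefficient names from the model above:

## estimated size (dispersion) parameter: exp(0.569), about 1.77
exp(coef(m1)[["(Intercept):2"]])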

Now let's look at a plot of the residuals versus fitted values. We add random horizontal noise as well as 50 percent transparency to alleviate overplotting and better see where most residuals fall. Note that these residuals are for the mean prediction.

output <- data.frame(resid = resid(m1)[, 1], fitted = fitted(m1))

ggplot(output, aes(fitted, resid)) +
  geom_jitter(position = position_jitter(width = 0.25), alpha = 0.5) +
  stat_smooth(method = "loess")

plot of chunk ztnb-residuals
It looks like the mean is around zero across all fitted levels. However, there are some values that look rather extreme. To see whether these have much influence, we can fit lines using quantile regression; these lines represent the 75th, 50th, and 25th percentiles.

ggplot(output, aes(fitted, resid)) +
  geom_jitter(position=position_jitter(width=.25), alpha=.5) +
  stat_quantile(method="rq")
## Smoothing formula not specified. Using: y ~ x

plot of chunk ztnb-quantile
Here we see the spread narrowing at higher levels. Let's cut the data into intervals and check box plots for each. We will take the breaks from the histogram algorithm.

output <- within(output, {
  broken <- cut(fitted, hist(fitted, plot=FALSE)$breaks)
})

ggplot(output, aes(broken, resid)) +
 geom_boxplot() +
 geom_jitter(alpha=.25)

plot of chunk ztnb-reshist
The variance seems to decrease slightly at higher fitted values, except for the very last category (as shown by the hinges of the boxplots).

To test whether we need to estimate overdispersion, we could fit a zero-truncated Poisson model and compare the two.

m2 <- vglm(formula = stay ~ age + hmo + died, family = pospoisson(), data = dat)

## change in deviance
(dLL <- 2 * (logLik(m1) - logLik(m2)))
## [1] 4307.039
## p-value, 1 df---the overdispersion parameter
pchisq(dLL, df = 1, lower.tail = FALSE)
## [1] 0

Based on this, we would conclude that the negative binomial model is a better fit to the data.
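As an additional rough cross-check (assuming the AIC methods that VGAM provides for vglm fits), we can compare information criteria for the two models; a much lower value for m1 points in the same direction as the likelihood ratio comparison:

## compare AIC for the zero-truncated negative binomial and Poisson fits
AIC(m1)
AIC(m2)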

We can get confidence intervals for the parameters and the exponentiated parameters using bootstrapping. For the negative binomial model, the exponentiated parameters are incidence rate ratios. We use the boot package. First, we get the coefficients from our original model to use as starting values, which speeds up estimation. Then we write a short function that takes the data and a vector of indices as input and returns the statistics we are interested in. Finally, we pass that to the boot function and do 1200 replicates, using snow to distribute the work across four cores. Note that you should adjust the number of cores to whatever your machine has. Also, for final results, one may wish to increase the number of replications to help ensure stable results.

dput(round(coef(m1),3))
## structure(c(2.408, 0.569, -0.016, -0.147, -0.218), .Names = c("(Intercept):1", 
## "(Intercept):2", "age", "hmo1", "died1"))
f <- function(data, i, newdata) {
  require(VGAM)
  m <- vglm(formula = stay ~ age + hmo + died, family = posnegbinomial(),
    data = data[i, ], coefstart = c(2.408, 0.569, -0.016, -0.147, -0.218))
  mparams <- as.vector(t(coef(summary(m))[, 1:2]))
  yhat <- predict(m, newdata, type = "response")
  return(c(mparams, yhat))
}

## newdata for prediction
newdata <- expand.grid(age = 1:9, hmo = factor(0:1), died = factor(0:1))
newdata$yhat <- as.vector(predict(m1, newdata, type = "response"))

set.seed(10)
res <- boot(dat, f, R = 1200, newdata = newdata, parallel = "snow", ncpus = 4)

The first 10 bootstrap statistics alternate between parameter estimates and their model-based standard errors: the first element is the first parameter estimate from our model, the second is its standard error, and so on; the remaining elements are the predicted values for newdata. Printing res shows the original statistics alongside the bootstrap bias and, in the third column, the bootstrapped standard errors.
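A minimal sketch of how to pull those pieces out of the boot object, using its t0 (original statistics) and t (bootstrap replicates) components; the column positions are assumed to follow the order in which f returns its statistics:

## original statistics: estimate/standard-error pairs for the five coefficients
res$t0[1:10]

## bootstrapped standard errors of the parameter estimates
apply(res$t[, c(1, 3, 5, 7, 9)], 2, sd)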

Now we can get the confidence intervals for all the parameters. We start on the original scale with percentile and basic bootstrap CIs.

## basic parameter estimates with percentile and basic bootstrap CIs
parms <- t(sapply(c(1, 3, 5, 7, 9), function(i) {
  out <- boot.ci(res, index = c(i, i + 1), type = c("perc", "basic"))
  with(out, c(Est = t0, pLL = percent[4], pUL = percent[5],
    basicLL = basic[4], basicUL = basic[5]))
}))

## add row names
row.names(parms) <- names(coef(m1))
## print results
parms
##                       Est         pLL         pUL     basicLL     basicUL
## (Intercept):1  2.40832736  2.26287029  2.55285268  2.26380204  2.55378442
## (Intercept):2  0.56863871  0.43811928  0.70414706  0.43313036  0.69915814
## age           -0.01569284 -0.04233407  0.01089443 -0.04228012  0.01094839
## hmo1          -0.14705739 -0.26275947 -0.03930690 -0.25480789 -0.03135531
## died1         -0.21777131 -0.32846094 -0.11476423 -0.32077839 -0.10708168

The bootstrapped confidence intervals are wider than would be expected from a normal-based approximation, and they are more consistent with the CIs Stata reports when robust standard errors are used.
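For comparison, here is a rough sketch of the normal-based (Wald) intervals computed directly from the model-based estimates and standard errors, using the same coefficient matrix accessed earlier via coef(summary(m1)):

## Wald-type 95% CIs from the model-based standard errors
est <- coef(summary(m1))
cbind(Est = est[, 1],
      LL = est[, 1] - qnorm(0.975) * est[, 2],
      UL = est[, 1] + qnorm(0.975) * est[, 2])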

Now we can estimate the incidence rate ratios (IRRs) for the negative binomial model. This is done using almost identical code as before, but passing a transformation function to the h argument of boot.ci, in this case exp to exponentiate.

## exponentiated parameter estimates with percentile and basic bootstrap CIs
expparms <- t(sapply(c(1, 3, 5, 7, 9), function(i) {
  out <- boot.ci(res, index = c(i, i + 1), type = c("perc", "basic"), h = exp)
  with(out, c(Est = t0, pLL = percent[4], pUL = percent[5],
    basicLL = basic[4], basicUL = basic[5]))
}))

## add row names
row.names(expparms) <- names(coef(m1))
## print results
expparms
##                      Est       pLL        pUL   basicLL    basicUL
## (Intercept):1 11.1153536 9.6106350 12.8436905 9.3870166 12.6200721
## (Intercept):2  1.7658616 1.5497898  2.0221212 1.5096019  1.9819333
## age            0.9844296 0.9585495  1.0109540 0.9579053  1.0103098
## hmo1           0.8632444 0.7689268  0.9614556 0.7650333  0.9575620
## died1          0.8043094 0.7200311  0.8915763 0.7170424  0.8885877

The results are consistent with what we initially viewed graphically: age does not have a significant effect, but hmo and died both do. In order to better understand our results and model, let's plot some predicted values. Because all of our predictors were either categorical (hmo and died) or had a small number of unique values (age), we will get predicted values for all possible combinations. This was actually done earlier, when we bootstrapped the parameter estimates, by creating a new data set with the expand.grid function and then estimating the predicted values with the predict function. Now we can plot that data.

ggplot(newdata, aes(x = age, y = yhat, colour = hmo))  +
  geom_point() +
  geom_line() +
  facet_wrap(~ died)

plot of chunk ztnb-predict
If we really wanted to compare the predicted values, we could bootstrap confidence intervals around the predicted estimates. These confidence intervals are not for individual predicted values, but for the mean predicted value (i.e., for the estimate, not for a new individual). Because fitting these models is slow, we included the predicted values earlier when we bootstrapped the model parameters. We will go back to the bootstrap output now and get the confidence intervals for the predicted values.

## get the bootstrapped percentile CIs
yhat <- t(sapply(10 + (1:nrow(newdata)), function(i) {
  out <- boot.ci(res, index = i, type = c("perc"))
  with(out, c(Est = t0, pLL = percent[4], pUL = percent[5]))
}))

## merge CIs with predicted values
newdata <- cbind(newdata, yhat)

## graph with CIs
ggplot(newdata, aes(x = age, y = yhat, colour = hmo, fill = hmo))  +
  geom_ribbon(aes(ymin = pLL, ymax = pUL), alpha = .25) +
  geom_point() +
  geom_line() +
  facet_wrap(~ died)

plot of chunk ztnb-predCI

Things to consider

References