When explanatory variables are standardized to help a generalized linear mixed model converge, the fitted coefficients may need to be unstandardized so they can be interpreted on the original scale. Here I show one approach to unstandardizing coefficients from a generalized linear mixed model fit with lme4.
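The core arithmetic can be sketched with a plain linear model and base R: dividing the slope for a scaled predictor by the standard deviation of the original variable recovers the original-scale slope. The same back-transformation applies to coefficients from an lme4 fit; the variables here are simulated purely for illustration.

```r
# A minimal sketch of the back-transformation, base R only
set.seed(16)
x <- rnorm(100, mean = 50, sd = 10)   # explanatory variable on original scale
y <- 2 + 0.3 * x + rnorm(100)

fit_raw <- lm(y ~ x)         # model on the original scale
fit_std <- lm(y ~ scale(x))  # model with standardized x

# Dividing the standardized slope by sd(x) recovers the original-scale slope
slope_unstd <- coef(fit_std)[2] / sd(x)
all.equal(unname(slope_unstd), unname(coef(fit_raw)[2]))  # TRUE
```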
Checking model fit for generalized linear mixed models (GLMM) can be challenging. The DHARMa package helps with this by providing simulated residuals, but it doesn't work with all model types. I show how to use tools in DHARMa to extend it for use with unsupported models fit with glmmTMB() and zeroinfl().
I currently work as a consulting statistician, advising natural and social science researchers on statistics, statistical programming, and study design. I create and teach R workshops for applied science graduate students who are just getting started in R, where my goal is to make their transition to a programming language as smooth as possible. See my workshop materials at my website.
Making a background color gradient in ggplot2
I was recently making some arrangements for the 2020 eclipse in South America, which of course got me thinking of the day we were lucky enough to have a path of totality come to us. We have a weather station that records local temperature every 5 minutes, so after the eclipse I was able to plot the temperature change over the eclipse as we experienced it at our house. Here is an example of a basic plot I made at the time.
Expanding binomial counts to binary 0/1 with purrr::pmap()
Data on successes and failures can be summarized and analyzed as counted proportions via the binomial distribution or as long format 0/1 binary data. I most often see summarized data when there are multiple trials done within a study unit; for example, when tallying up the number of dead trees out of the total number of trees in a plot. If these within-plot trials are all independent, analyzing data in a binary format instead of summarized binomial counts doesn’t change the statistical results.
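The expansion itself can be sketched roughly like this, with hypothetical columns `num_dead` and `total_trees`, purrr::pmap() looping over the rows, and rep() building the 0/1 vector for each plot:

```r
# A small sketch of expanding binomial counts to binary 0/1 rows;
# column names are made up for illustration
library(purrr)

dat <- data.frame(plot = c("p1", "p2"),
                  num_dead = c(2, 0),
                  total_trees = c(3, 2))

# pmap() calls the function once per row, matching arguments to column names
binary_list <- pmap(dat, function(plot, num_dead, total_trees) {
  data.frame(plot = plot,
             dead = rep(c(1, 0), times = c(num_dead, total_trees - num_dead)))
})

# Stack the per-plot data frames into one long 0/1 dataset
binary <- do.call(rbind, binary_list)
binary$dead  # 1 1 0 0 0
```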
More exploratory plots with ggplot2 and purrr: Adding conditional elements
This summer I was asked to collaborate on an analysis project with many response variables. As usual, I planned on automating my initial graphical data exploration through the use of functions and purrr::map() as I’ve written about previously. However, this particular project was a follow-up to a previous analysis, in which different variables were analyzed on different scales. I wanted to put the new plots on whatever scale each variable had been analyzed in the original analysis.
Many similar models - Part 2: Automate model fitting with purrr::map() loops
When we have many similar models to fit, automating at least some portions of the task can be a real time saver. In my last post I demonstrated how to make a function for model fitting. Once you have made such a function it’s possible to loop through variable names and fit a model for each one. In this post I am specifically focusing on having many response variables with the same explanatory variables, using purrr::map() and friends for the looping.
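A rough sketch of that looping pattern, using a made-up fitting function and two responses from the built-in iris data; set_names() keeps the response names on the output list so individual models are easy to extract:

```r
# A sketch of looping over response variables with purrr::map();
# the fitting function and choice of responses are illustrative only
library(purrr)

fit_model <- function(response, data) {
  lm(reformulate("Species", response = response), data = data)
}

responses <- set_names(c("Sepal.Length", "Petal.Width"))
models <- map(responses, fit_model, data = iris)

# One fitted model per response, named for easy extraction
map_dbl(models, ~ summary(.x)$r.squared)
```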
Many similar models - Part 1: How to make a function for model fitting
I worked with several students over the last few months who were fitting many linear models, all with the same basic structure but different response variables. They were struggling to find an efficient way to do this in R while still taking the time to check model assumptions. A first step when working towards a more automated process for fitting many models is to learn how to build model formulas with paste() and as.formula().
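The formula-building step itself is short; a minimal sketch using the built-in iris data, gluing the response name into a formula string and converting it with as.formula() before fitting:

```r
# Build a model formula from a character string, then fit with it
resp <- "Sepal.Length"
f <- as.formula(paste(resp, "~ Species"))
f
# Sepal.Length ~ Species

fit <- lm(f, data = iris)
```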
The small multiples plot: how to combine ggplot2 plots with one shared axis
There are a variety of ways to combine ggplot2 plots with a single shared axis. However, things can get tricky if you want a lot of control over all plot elements. I demonstrate three different approaches for this: 1. Using facets, which is built in to ggplot2 but doesn’t allow much control over the non-shared axes. 2. Using package cowplot, which has a lot of nice features but the plot spacing doesn’t play well with a single shared axis.
Embedding subplots in ggplot2 graphics
The idea of embedded plots for visualizing a large dataset that has an overplotting problem recently came up in some discussions with students. I first learned about embedded graphics from package ggsubplot. You can still see an old post about that package and about embedded graphics in general, with examples. However, ggsubplot is no longer maintained and doesn’t work with current versions of ggplot2. I poked around a bit, and found that annotation_custom() is the go-to function for embedding plots in a ggplot2 graphic.
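As a rough sketch of that approach, the inset plot is converted to a grob with ggplotGrob() and then placed with annotation_custom(); the mtcars variables and the placement coordinates here are arbitrary choices for illustration:

```r
# Embed one ggplot inside another via annotation_custom()
library(ggplot2)

main <- ggplot(mtcars, aes(wt, mpg)) + geom_point()
inset <- ggplot(mtcars, aes(factor(cyl))) +
  geom_bar() +
  theme_minimal(base_size = 8)

# Place the inset in data coordinates of the main plot
combined <- main +
  annotation_custom(ggplotGrob(inset),
                    xmin = 4, xmax = 5.5, ymin = 22, ymax = 34)
```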
Custom contrasts in emmeans
Following up on a previous post, where I demonstrated the basic usage of package emmeans for doing post hoc comparisons, here I’ll demonstrate how to make custom comparisons (aka contrasts). These are comparisons that aren’t encompassed by the built-in functions in the package. Remember that you can explore the available built-in emmeans functions for doing comparisons via ?"contrast-methods". There are a variety of reasons you might need custom comparisons instead of some of the standard, built-in ones.
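A minimal sketch of one such custom contrast, using the built-in PlantGrowth data; the contrast name and weight vector are my own illustration, with the weights given in the order of the estimated marginal means:

```r
# A custom contrast via a named list of weights passed to contrast()
library(emmeans)

fit <- lm(weight ~ group, data = PlantGrowth)   # groups: ctrl, trt1, trt2
emm <- emmeans(fit, "group")

# Compare the average of the two treatments to the control
contrast(emm, method = list("avg trt - ctrl" = c(-1, 0.5, 0.5)))
```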
Getting started with emmeans
Package emmeans (formerly known as lsmeans) is enormously useful for folks wanting to do post hoc comparisons among groups after fitting a model. It has a very thorough set of vignettes (see the vignette topics here), is very flexible with a ton of options, and works out of the box with a lot of different model objects (and can be extended to others 👍). I’ve started recommending emmeans all the time to students fitting models in R.
Lots of zeros or too many zeros?: Thinking about zero inflation in count data
In a recent lecture I gave a basic overview of zero inflation in count distributions. My main take-home message to the students, which I thought worth posting about here, is that having a lot of zero values does not necessarily mean you have zero inflation. Zero inflation is when there are more 0 values in the data than the distribution allows for. But some distributions can have a lot of zeros!
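That point is easy to demonstrate with base R alone: a Poisson distribution with a small mean already puts most of its mass at zero, with no zero inflation involved.

```r
# Expected proportion of zeros under a Poisson with mean 0.5
dpois(0, lambda = 0.5)   # ~ 0.61

# Observed zeros in simulated Poisson data match that expectation
set.seed(16)
y <- rpois(10000, lambda = 0.5)
mean(y == 0)
```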