Jeromy Anglim's Blog: Psychology and Statistics


Friday, May 18, 2012

Example Reproducible Report using R Markdown: Analysis of California Schools Test Data

This is a quick set of analyses of the California Test Score dataset. The post was produced using R Markdown in RStudio 0.96. The main purpose of this post is to provide a case study of using R Markdown to prepare a quick reproducible report, with examples of plots, output, in-line R code, and markdown. The post is designed to be read alongside the R Markdown source code, which is available as a gist on GitHub.

Preliminaries

Load packages and data

# If necessary, uncomment and install the packages:
# install.packages('AER')
# install.packages('psych')
# install.packages('Hmisc')
# install.packages('ggplot2')
# install.packages('relaimpo')
library(AER)  # interesting datasets
library(psych)  # describe and psych.panels
library(Hmisc)  # describe
library(ggplot2)  # plots: ggplot and qplot
library(relaimpo)  # relative importance in regression
# load the California Schools Dataset and give the dataset a shorter name
data(CASchools)
cas <- CASchools

# Convert grades to a logical indicator (TRUE for KK-08 districts)

# table(cas$grades)
cas$gradesN <- cas$grades == "KK-08"

# Get the names of the numeric (and logical) variables
v <- setdiff(names(cas), c("district", "school", "county", "grades"))

Q. 1 What does the CASchools dataset involve?

Quoting the help (i.e., ?CASchools), the data is “from all 420 K-6 and K-8 districts in California with data available for 1998 and 1999” and the variables are:

* district: character. District code.
* school: character. School name.
* county: factor indicating county.
* grades: factor indicating grade span of district.
* students: Total enrollment.
* teachers: Number of teachers.
* calworks: Percent qualifying for CalWorks (income assistance).
* lunch: Percent qualifying for reduced-price lunch.
* computer: Number of computers.
* expenditure: Expenditure per student.
* income: District average income (in USD 1,000).
* english: Percent of English learners.
* read: Average reading score.
* math: Average math score.

Let's look at the basic structure of the data frame, i.e., the number of observations and the types of values:

str(cas)
## 'data.frame':    420 obs. of  15 variables:
##  $ district   : chr  "75119" "61499" "61549" "61457" ...
##  $ school     : chr  "Sunol Glen Unified" "Manzanita Elementary" "Thermalito Union Elementary" "Golden Feather Union Elementary" ...
##  $ county     : Factor w/ 45 levels "Alameda","Butte",..: 1 2 2 2 2 6 29 11 6 25 ...
##  $ grades     : Factor w/ 2 levels "KK-06","KK-08": 2 2 2 2 2 2 2 2 2 1 ...
##  $ students   : num  195 240 1550 243 1335 ...
##  $ teachers   : num  10.9 11.1 82.9 14 71.5 ...
##  $ calworks   : num  0.51 15.42 55.03 36.48 33.11 ...
##  $ lunch      : num  2.04 47.92 76.32 77.05 78.43 ...
##  $ computer   : num  67 101 169 85 171 25 28 66 35 0 ...
##  $ expenditure: num  6385 5099 5502 7102 5236 ...
##  $ income     : num  22.69 9.82 8.98 8.98 9.08 ...
##  $ english    : num  0 4.58 30 0 13.86 ...
##  $ read       : num  692 660 636 652 642 ...
##  $ math       : num  690 662 651 644 640 ...
##  $ gradesN    : logi  TRUE TRUE TRUE TRUE TRUE TRUE ...
# Hmisc::describe(cas) # For more extensive summary statistics

Q. 2 To what extent does expenditure per student vary?

qplot(expenditure, data = cas) + xlim(0, 8000) + xlab("Money spent per student ($)") + 
    ylab("Count of schools")

[plot of chunk cas2: histogram of expenditure per student]


round(t(psych::describe(cas$expenditure)), 1)
##            [,1]
## var         1.0
## n         420.0
## mean     5312.4
## sd        633.9
## median   5214.5
## trimmed  5252.9
## mad       487.2
## min      3926.1
## max      7711.5
## range    3785.4
## skew        1.1
## kurtosis    1.9
## se         30.9

The greatest expenditure per student is around double the smallest.
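
As a quick check of that claim, the ratio of the largest to the smallest expenditure can be computed directly (a one-liner not in the original source):

with(cas, max(expenditure)/min(expenditure))  # ~1.96, consistent with describe() above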

Q. 3 What predicts expenditure per student?

# Compute and format set of correlations
corExp <- cor(cas["expenditure"], cas[setdiff(v, "expenditure")])
corExp <- round(t(corExp), 2)
corExp[order(corExp[, 1], decreasing = TRUE), , drop = FALSE]
##          expenditure
## income          0.31
## read            0.22
## math            0.15
## calworks        0.07
## lunch          -0.06
## computer       -0.07
## english        -0.07
## teachers       -0.10
## students       -0.11
## gradesN        -0.17

More is spent per student in schools:

  1. located in districts with higher average incomes,
  2. with higher reading scores, and
  3. that span grades K-6.

Q. 4 What is the relationship between district-level maths and reading scores?

ggplot(cas, aes(read, math)) + geom_point() + geom_smooth()

[plot of chunk cas4: scatterplot of math against read with a loess smoother]

At the district level, the correlation is very strong (r = 0.92). From prior experience I'd expect correlations at the individual level to be in the .3 to .6 range. Thus, these results are consistent with group-level relationships being much larger than individual-level relationships.
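
For reference, the inline value quoted above can be reproduced with a one-line check (not part of the original chunk):

round(cor(cas$read, cas$math), 2)  # 0.92, as quoted above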

Q. 5 What is the relationship between maths and reading after partialling out other effects?

# partial.r has awkward syntax: it takes column numbers rather than
# variable names
partial.r(cas[v], c(which(names(cas[v]) == "read"), which(names(cas[v]) == 
    "math")), which(!names(cas[v]) %in% c("read", "math")))
## partial correlations 
##      read math
## read 1.00 0.72
## math 0.72 1.00

The partial correlation is still very strong but is substantially reduced.
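
If the column-number syntax feels opaque, the same quantity can be computed more transparently by residualising read and math on the remaining variables and correlating the residuals; this is the standard regression-based definition of a partial correlation (a sketch, assuming the variable set v defined earlier):

others <- setdiff(v, c("read", "math"))
res <- function(y) resid(lm(reformulate(others, response = y), data = cas[v]))
round(cor(res("read"), res("math")), 2)  # ~0.72, matching partial.r above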

Q. 6 What fraction of a computer does each student have?

cas$compstud <- cas$computer/cas$students
describe(cas$compstud)
## cas$compstud 
##       n missing  unique    Mean     .05     .10     .25     .50     .75 
##     420       0     412  0.1359 0.05471 0.06654 0.09377 0.12546 0.16447 
##     .90     .95 
## 0.22494 0.24906 
## 
## lowest : 0.00000 0.01455 0.02266 0.02548 0.04167
## highest: 0.32770 0.34359 0.34979 0.35897 0.42083 
qplot(compstud, data = cas)
## stat_bin: binwidth defaulted to range/30. Use 'binwidth = x' to adjust this.

[plot of chunk unnamed-chunk-4: histogram of computers per student]

The mean number of computers per student is 0.136.
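
The inline figure above is just the mean of the new variable (a one-line check, not in the original source):

round(mean(cas$compstud), 3)  # 0.136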

Q. 7 What is a good model of the combined effect of other variables on academic performance (i.e., math and read)?

# Examine correlations between variables
psych::pairs.panels(cas[v])

[plot of chunk cas7: pairs.panels scatterplot matrix of the numeric variables]

pairs.panels shows correlations in the upper triangle, scatterplots in the lower triangle, and variable names and distributions on the main diagonal.
After examining the plot, several ideas emerge.

# (a) students is a count and could be log transformed
cas$studentsLog <- log(cas$students)

# (b) teachers is not quite the variable of interest:
#   the number of students per teacher is
cas$studteach <- cas$students / cas$teachers
# (c) computer is not quite the variable of interest:
#   the ratio of computers to students is
# table(cas$computer == 0)
# Note: some schools have no computers, so students per computer
# would be problematic; take computers per student instead
cas$compstud <- cas$computer / cas$students

# (d) math and read are highly correlated; reduce them to one composite
cas$performance <- as.numeric(
        scale(scale(cas$read) + scale(cas$math)))

Normally, I'd add all these transformations to an initial data transformation file that I call in the first block, but for the sake of the narrative, I'll leave them here.
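
For example, the pattern would look something like this (transform.R is a hypothetical file name; the transformations are the same ones shown above):

# transform.R (hypothetical): all derived variables in one place
cas$gradesN     <- cas$grades == "KK-08"
cas$studentsLog <- log(cas$students)
cas$studteach   <- cas$students / cas$teachers
cas$compstud    <- cas$computer / cas$students
cas$performance <- as.numeric(scale(scale(cas$read) + scale(cas$math)))

# The first chunk of the report would then simply call:
# source('transform.R')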

Let's examine correlations between predictors and outcome.

m1cor <- cor(cas$performance, cas[c("studentsLog", "studteach", "calworks", 
    "lunch", "compstud", "income", "expenditure", "gradesN")])
t(round(m1cor, 2))
##              [,1]
## studentsLog -0.12
## studteach   -0.23
## calworks    -0.63
## lunch       -0.87
## compstud     0.27
## income       0.71
## expenditure  0.19
## gradesN     -0.16

Let's examine the multiple regression.

m1 <- lm(performance ~ studentsLog + studteach + calworks + lunch + 
    compstud + income + expenditure + grades, data = cas)
summary(m1)
## 
## Call:
## lm(formula = performance ~ studentsLog + studteach + calworks + 
##     lunch + compstud + income + expenditure + grades, data = cas)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.8107 -0.2963 -0.0118  0.2712  1.5662 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  8.99e-01   4.98e-01    1.80    0.072 .  
## studentsLog -3.83e-02   1.91e-02   -2.01    0.045 *  
## studteach   -1.11e-02   1.59e-02   -0.70    0.487    
## calworks     1.96e-03   2.96e-03    0.66    0.508    
## lunch       -2.65e-02   1.48e-03  -17.97  < 2e-16 ***
## compstud     7.88e-01   3.86e-01    2.04    0.042 *  
## income       2.82e-02   4.89e-03    5.77  1.6e-08 ***
## expenditure  5.87e-05   4.90e-05    1.20    0.232    
## gradesKK-08 -1.21e-01   6.49e-02   -1.87    0.062 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 
## 
## Residual standard error: 0.457 on 411 degrees of freedom
## Multiple R-squared: 0.795,   Adjusted R-squared: 0.791 
## F-statistic:  199 on 8 and 411 DF,  p-value: <2e-16 
## 

And some indicators of predictor relative importance.

# calc.relimp from relaimpo package.
(m1relaimpo <- calc.relimp(m1, type = "lmg", rela = TRUE))
## Response variable: performance 
## Total response variance: 1 
## Analysis based on 420 observations 
## 
## 8 Regressors: 
## studentsLog studteach calworks lunch compstud income expenditure grades 
## Proportion of variance explained by model: 79.48%
## Metrics are normalized to sum to 100% (rela=TRUE). 
## 
## Relative importance metrics: 
## 
##                  lmg
## studentsLog 0.009973
## studteach   0.016695
## calworks    0.177666
## lunch       0.492866
## compstud    0.025815
## income      0.251769
## expenditure 0.014785
## grades      0.010432
## 
## Average coefficients for different model sizes: 
## 
##                   1X        2Xs        3Xs        4Xs        5Xs
## studentsLog -0.08771 -0.0650133 -0.0558756 -0.0519312 -4.926e-02
## studteach   -0.11918 -0.0861199 -0.0629499 -0.0462155 -3.372e-02
## calworks    -0.05473 -0.0427576 -0.0324658 -0.0233760 -1.535e-02
## lunch       -0.03199 -0.0310310 -0.0301497 -0.0293300 -2.856e-02
## compstud     4.15870  3.0673338  2.2639604  1.6844348  1.287e+00
## income       0.09860  0.0850555  0.0726892  0.0614726  5.140e-02
## expenditure  0.00030  0.0001986  0.0001374  0.0001013  8.061e-05
## grades      -0.45677 -0.3345683 -0.2529014 -0.1981200 -1.628e-01
##                    6Xs        7Xs        8Xs
## studentsLog -4.626e-02 -4.252e-02 -3.833e-02
## studteach   -2.418e-02 -1.687e-02 -1.109e-02
## calworks    -8.399e-03 -2.612e-03  1.962e-03
## lunch       -2.785e-02 -2.718e-02 -2.654e-02
## compstud     1.034e+00  8.828e-01  7.884e-01
## income       4.250e-02  3.477e-02  2.821e-02
## expenditure  6.882e-05  6.206e-05  5.871e-05
## grades      -1.414e-01 -1.291e-01 -1.215e-01

Thus, we can conclude that:

  1. Income and indicators of income (e.g., a low percentage of students qualifying for reduced-price lunch) are the two main predictors. Thus, schools in districts with greater average income tend to have better student performance.
  2. Schools with more computers per student have better student performance.
  3. Schools with fewer students per teacher have better student performance.

For more information about relative importance and the measures in the relaimpo package, check out Ulrike Grömping's website.
Of course this is all observational data with the usual caveats regarding causal interpretation.

Now, let's look at some weird stuff.

Q. 8.1 What are common words in Californian School names?

# create a vector of the words that occur in school names
lw <- unlist(strsplit(cas$school, split = " "))

# create a table of word frequencies
tlw <- table(lw)

# extract cells of table with count greater than 3
tlw2 <- tlw[tlw > 3]

# sorted in decreasing order
tlw2 <- sort(tlw2, decreasing = TRUE)

# values as proportions
tlw2p <- round(tlw2/nrow(cas), 3)

# show this in a bar graph
tlw2pdf <- data.frame(word = names(tlw2p), prop = as.numeric(tlw2p), 
    stringsAsFactors = FALSE)
ggplot(tlw2pdf, aes(word, prop)) + geom_bar(stat = "identity") + coord_flip()

[plot of chunk unnamed-chunk-8: bar chart of word proportions in school names]

# same plot on a log-count scale
ggplot(tlw2pdf, aes(word, log(prop * nrow(cas)))) + geom_bar(stat = "identity") + 
    coord_flip()

[plot of chunk unnamed-chunk-9: bar chart of log word counts in school names]

The word “Elementary” appears in almost all school names (98.3%). The word “Union” appears in a bit under half (43.3%).
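
Both figures can be read straight off the proportion table computed above (a quick check, not in the original):

tlw2p[c("Elementary", "Union")]  # 0.983 and 0.433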

Other common words pertain to:

  • Directions (e.g., South, West),
  • Features of the environment (e.g., Creek, Vista, View, Valley)
  • Spanish words (e.g., rio for river; san for saint)

Q. 8.2 Is the number of letters in the school's name related to academic performance?

cas$namelen <- nchar(cas$school)
table(cas$namelen)
## 
## 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 37 38 39 
##  1  4  9 26 28 31 33 27 30 45 38 28 36 30 18 10  5  4  6  3  1  2  2  2  1 
round(cor(cas$namelen, cas[, c("read", "math")]), 2)
##      read math
## [1,] 0.03    0

The answer appears to be “no”.

Q. 8.3 Is the number of words in the school name related to academic performance?

cas$nameWordCount <- sapply(strsplit(cas$school, " "), length)
table(cas$nameWordCount)
## 
##   2   3   4   5 
## 140 202  72   6 
round(cor(cas$nameWordCount, cas[, c("read", "math")]), 2)
##      read math
## [1,] 0.05 0.01

The answer appears to be “no”.

Q. 8.4 Are schools with popular nature words in their names doing better academically?

tlw2p  # recall the table of popular words
## lw
## Elementary      Union       City     Valley      Joint       View 
##      0.983      0.433      0.060      0.040      0.031      0.019 
##   Pleasant        San      Creek        Oak      Santa       Lake 
##      0.017      0.017      0.014      0.014      0.014      0.012 
##   Mountain       Park        Rio      Vista      Grove   Lakeside 
##      0.012      0.012      0.012      0.012      0.010      0.010 
##      South    Unified       West 
##      0.010      0.010      0.010 
# Create a quick and dirty list of popular nature names
naturenames <- c("Valley", "View", "Creek", "Lake", "Mountain", "Park", 
    "Rio", "Vista", "Grove", "Lakeside")

# flag schools whose name contains any of these words
schsplit <- strsplit(cas$school, " ")
cas$hasNature <- sapply(schsplit, function(X) length(intersect(X, 
    naturenames)) > 0)
round(cor(cas$hasNature, cas[, c("read", "math")]), 2)
##      read math
## [1,] 0.09 0.08

So we've found a small correlation.

Let's graph the data to see what it means:

ggplot(cas, aes(hasNature, read)) + geom_boxplot() + geom_jitter(position = position_jitter(width = 0.1)) + 
    xlab("Has a nature name") + ylab("Mean student reading score")

[plot of chunk unnamed-chunk-14: boxplot with jittered points of reading scores by nature-name status]

So, in the sample, schools with nature names have slightly better reading scores (and, had we graphed them, maths scores). However, the number of schools with nature names is fairly small (n = 61) despite the overall large sample size.
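
The n = 61 comes from counting the schools flagged by hasNature (a one-line check, not in the original source):

sum(cas$hasNature)  # 61 schools with a nature word in the name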

But is it statistically significant?

t.read <- t.test(cas[cas$hasNature, "read"], cas[!cas$hasNature, 
    "read"])
t.math <- t.test(cas[cas$hasNature, "math"], cas[!cas$hasNature, 
    "math"])

So, the p-value is less than .05 for reading (p = 0.046) but not quite for maths (p = 0.083). Bingo! After a little bit of data fishing, we have found that reading scores are “significantly” greater for schools with the listed nature names.
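
The inline p-values are extracted from the htest objects returned by t.test, e.g.:

round(c(read = t.read$p.value, math = t.math$p.value), 3)  # 0.046 and 0.083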

But wait: I've asked three separate exploratory questions, or perhaps six if we count the maths tests as well.

  • $\frac{.05}{3} = .0167$
  • $\frac{.05}{6} = .0083$

At these Bonferroni-corrected significance thresholds, the result is non-significant. Oh well…
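
An equivalent check, assuming six comparisons, inflates the p-values with base R's p.adjust rather than deflating alpha (a sketch, not in the original):

p.adjust(c(read = t.read$p.value, math = t.math$p.value), method = "bonferroni", n = 6)
# 0.046 * 6 = 0.276 and 0.083 * 6 = 0.498 -- both well above .05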

Review

Anyway, the aim of this post was not to make profound statements about California schools. Rather, the aim was to show how easy it is to produce quick reproducible reports with R Markdown. If you haven't already, you may want to open the R Markdown file used to produce this post in RStudio and compile the report yourself.

In particular, I can see R Markdown being my tool of choice for:

  • Blog posts
  • Posts to StackExchange sites
  • Materials for training workshops
  • Short consulting reports, and
  • Exploratory analyses as part of a larger project.

The real question is how far I can push Markdown before I start to miss the control of LaTeX. Markdown does permit arbitrary HTML. Anyway, if you have any thoughts about the scope of R Markdown, feel free to add a comment.