Update to March 2022 of the Temperature Anomaly Model and Plot

The temperature anomaly model and plot are updated to the most recent data. The data sources have changed somewhat. For the temperature anomaly, the new data is from the NASA website (https://data.giss.nasa.gov/gistemp/tabledata_v4/GLB.Ts+dSST.csv). For the Multivariate El Niño Southern Oscillation index, the new data is from the NOAA website (https://www.psl.noaa.gov/enso/mei – the index is shifted to the right by one-half month by averaging consecutive data points).
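
A minimal sketch of that half-month shift in R, assuming 'mei' holds the monthly index as a numeric vector (the name is hypothetical):

# average consecutive months to shift the series right by half a month
mei.shift = function( mei ) ( mei[ -length( mei ) ] + mei[ -1 ] )/2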

For the carbon dioxide levels, the new data is from the NOAA website (https://gml.noaa.gov/aftp/products/trends/co2/co2_mm_mlo.txt – the de-seasonalized series was used). For the sunspots, the data is the current series at the Royal Observatory of Belgium (https://www.sidc.be/silso/datafiles – the total sunspot number series is used).
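
As a reference, a hedged sketch of reading the Mauna Loa file into R (the header lines of the file begin with '#'; which column holds the de-seasonalized series should be checked against those header comments):

co2.raw = read.table( "co2_mm_mlo.txt", comment.char="#" )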

For the temperature anomaly, the data from January 1880 to March 2022 is used. For the sunspots, the data from October 1879 to December 2021 is used (three months earlier than the anomaly series). For the Multivariate El Niño Southern Oscillation index and for the carbon dioxide levels, the data from January 1880 to December 1978 is the old data, and the data from January 1979 to March 2022 is the new data.

The function to print the results is:

temp.model.7.3.2022.print.and.plot = function(
  ta=ta.nasa.1.1880.5.2022_7.2.22.ts,
  xmat=xmat3.merged.7.2.2022.ts,
  ord=c( 3, 0, 0 ),
  seas=list( order=c( 2, 0, 0 ), period=12 ),
  sta=c( 1880, 1 ),
  en=c( 2022, 3 )
  ) {

  # fit the regression with seasonal AR errors over the modeling window
  mod = arima( window( ta, start=sta, end=en ), order=ord,
    seasonal=seas, xreg=xmat )

  print( summary( mod ) )

  # plot the observed anomaly in light grey
  plot( ta, col="grey80", xlab="Year", ylab="Temperature Anomaly",
    main="Temperature Anomaly and\nthe Fit of the Model", font.main=1 )

  # overlay the regression part of the fit: the intercept (coefficient 6)
  # plus the three regressors times their coefficients (7 through 9)
  lines( seq( sta[1]+(sta[2]-1)/12, en[1]+(en[2]-1)/12, 1/12 ),
    coef( mod )[ 6 ] + xmat %*% coef( mod )[ 7:9 ],
    col="grey10" )
}

The results from running the function are:

> temp.model.7.3.2022.print.and.plot()

Call:
arima(x = window(ta, start = sta, end = en), order = ord, seasonal = seas,
    xreg = xmat)

Coefficients:
         ar1     ar2     ar3    sar1    sar2  intercept    enso     co2  sunspot
      0.4827  0.1830  0.0729  0.0801  0.1421    -3.2730  0.0469  0.0101    3e-04
s.e.  0.0243  0.0266  0.0242  0.0244  0.0242     0.1186  0.0069  0.0004    1e-04

sigma^2 estimated as 0.01103:  log likelihood = 1424.23,  aic = -2828.46

Note that the current lack of increase in temperature is accounted for in the model by low sunspot numbers and low values of the Multivariate El Niño Southern Oscillation index. The three explanatory variables are plotted below against time.
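
A one-line sketch of that plot, assuming 'xmat3.merged.7.2.2022.ts' is a multivariate monthly time series with the enso, co2, and sunspot columns:

plot( xmat3.merged.7.2.2022.ts, main="Explanatory Variables", xlab="Year" )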

More Data on Gerrymandering in Iowa – Does Not Look Too Likely

The statehouse districts in Iowa were drawn in 2011. There are 50 senate districts, 25 of which are elected in each election cycle, and 100 house districts. For the senate districts, while the elections in 2012 and 2014 favored the Democrats, in 2016, 2018, and 2020 the Republicans were favored.

In the house, in 2018, the Democrats received more votes but the Republicans won more house seats. In 2012, 2014, 2016, and 2020, the Republicans won both the votes and the statehouse.

The reason for the imbalance in 2018 is that Democratic districts are almost always urban and Republican districts are almost always rural, and the urban districts have a higher proportion of Democrats than the rural districts have of Republicans. The code and data can be found at http://rpubs.com/margothtc/732593.

Calculating Vulcan and Cent using R

function( ce=cent.data.6.22.16.lat.dec, m=cnms.12.3 ){
  ce = data.matrix( ce )

  # Vulcan

  # longitude: average four longitudes on the circle - sort them, find the
  # largest gap between consecutive values (including the wraparound gap),
  # and use the position of that gap as the number of 360s to add to the
  # sum before averaging, so the mean lands inside the cluster of points
  vulc.l = t( apply( ce[ , c( 44, 47, 50, 53 ) ], 1, sort ) )
  dvulc.l = cbind( vulc.l[ , 2:4 ] - vulc.l[ , 1:3 ], 360 + vulc.l[ , 1 ] - vulc.l[ , 4 ] )
  mult = t( apply( dvulc.l, 1, order ) )[ , 4 ]
  vulc.ll = ( ( apply( vulc.l, 1, sum ) + mult*360 )*0.25 )%%360

  # declination: average the four declination columns (converted to
  # radians), then combine with the angle in column 1 and the Vulcan
  # longitude by the spherical conversion formula
  vulc.d = apply( ce[ , c( 45, 48, 51, 54 ) ], 1, sum )*pi/720
  vulc.d = ( 180/pi )*asin( sin( vulc.d )*cos( ce[ , 1 ]*pi/180 ) +
    cos( vulc.d )*sin( ce[ , 1 ]*pi/180 )*sin( vulc.ll*pi/180 ) )

  # Cent

  # longitude: the same largest-gap trick with two points
  cent.l = t( apply( ce[ , c( 38, 41 ) ], 1, sort ) )
  dcent.l = cbind( cent.l[ , 2 ] - cent.l[ , 1 ], 360 + cent.l[ , 1 ] - cent.l[ , 2 ] )
  mult = t( apply( dcent.l, 1, order ) )[ , 2 ]
  cent.ll = ( ( apply( cent.l, 1, sum ) + mult*360 )*0.5 )%%360

  # declination
  cent.d = apply( ce[ , c( 39, 42 ) ], 1, sum )*pi/360
  cent.d = ( 180/pi )*asin( sin( cent.d )*cos( ce[ , 1 ]*pi/180 ) +
    cos( cent.d )*sin( ce[ , 1 ]*pi/180 )*sin( cent.ll*pi/180 ) )

  # put the results in a matrix, one .L (longitude) and .D (declination)
  # column pair per point
  ce = cbind( ce[ , c( 1, c( 2, 4 )+rep( seq( 0, 12, 3 ), each=2 ) ) ],
    vulc.ll, vulc.d, ce[ , c( 17, 19, 20, 22, 38, 40 ) ],
    ce[ , c( c( 23, 25 )+rep( seq( 0, 12, 3 ), each=2 ),
      c( 44, 46 )+rep( seq( 0, 11, 3 ), each=2 ) ) ],
    ce[ , c( 41, 43 ) ], cent.ll, cent.d )
  colnames( ce ) = paste( m, c( "", rep( c( ".L", ".D" ), 20 ) ), sep="" )

  ce
}

Is Iowa gerrymandered?

Iowa is proud of the non-partisan method we use to set our senate and house districts.  But the evidence suggests some gerrymandering may be present.  See the two plots below.

The two plots show the proportion of votes candidates received and the proportion of legislators elected to the statehouse in 2018 for three classes – Democrats, Republicans, and others.

The source of the Iowa voter data is at https://sos.iowa.gov/elections/pdf/2018/general/canvsummary.pdf.  The R function to extract the data for the House is

function( fl="canvsummary.xlsx" ){

    res=vector( mode="list", length=0 ) 
    
     for ( i in 88:187 ){
         re = read_xlsx( fl, sheet=i )
         nre = colnames( re )
         re = unlist( as.numeric( re[ nrow( re ), ] ) )
         names( re ) = nre
         print( re )
         res[[i-87]] = re 
     }
    
    res
}

For the Senate, '88:187' is replaced by '63:87' and 'i-87' by 'i-62', as in the sketch below.
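
The Senate version with those substitutions made:

function( fl="canvsummary.xlsx" ){

    res = vector( mode="list", length=0 )

    # Senate results are on sheets 63 to 87
    for ( i in 63:87 ){
        re = read_xlsx( fl, sheet=i )
        nre = colnames( re )
        re = as.numeric( unlist( re[ nrow( re ), ] ) )
        names( re ) = nre
        print( re )
        res[[ i-62 ]] = re
    }

    res
}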

The source for the numbers of legislators in the House is https://www.ncsl.org/Portals/1/Documents/Elections/Legis_Control_112118_26973.pdf.  The row for Iowa is 15. The House data is in columns 8 to 10.

For the Senate, the numbers can be found at https://www.legis.iowa.gov/legislators/senate.  Only half of the senate is elected in any general election.  In 2018, senators of the odd numbered districts were elected.

The R function I used to create the plots for the House was

function( ds=ia.hou.8.7.19, n=100, id=8,
          lscn=Legis_Control_112118_26973_8_7_19[ 15, ],
          ch="Iowa House 2018 Election" ){

    # sum the votes over the n districts by party
    sm = c( 0, 0, 0 )
    names( sm ) = c( "DEM", "REP", "OTHER" )
    for ( i in 1:n ){
        ei = which( grepl( "Write", names( ds[[i]] ) ) )
        di = which( grepl( "DEM", names( ds[[i]] ) ) )
        ri = which( grepl( "REP", names( ds[[i]] ) ) )
        if ( length( di )>0 ) sm[1] = sm[1] + ds[[i]][di]
        if ( length( ri )>0 ) sm[2] = sm[2] + ds[[i]][ri]
        # everything between the major parties and the write-ins is 'other'
        sm[3] = sm[3] + sum( ds[[i]][ ( max( di, ri )+1 ):ei ] )
    }
    print( sm )

    # pie of votes on the left, pie of seats on the right
    par( mfrow=c(1,2), oma=c(2,1,1,1) )
    pie( sm, labels=names( sm ), main="Votes", col=c( "blue","red","tan" ) )

    lscn[ is.na( lscn ) ] = 0
    print( lscn[ id:(id+2) ] )
    pie( as.numeric( lscn[ id:(id+2) ] ), labels=names( sm ),
         main="Legislature", col=c( "blue","red","tan" ) )
    mtext( ch, side=1, outer=T, cex=2, font=2 )
}
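
With the defaults above, the House plot is produced by calling the function with no arguments; if the function is assigned to a name, say 'ia.house.plot.fun' (a hypothetical name), that is

ia.house.plot.fun()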

For the Senate, there were 12 Democrats and 13 Republicans elected, and those were the numbers used in the plot.

I used the program 'smallpdf' to convert the PDFs to Excel spreadsheets, which I accessed using RStudio.  The R code for the legislative file was

library(readxl)
Legis_Control_112118_26973_8_7_19 <- read_excel(
    "Legis_Control_112118_26973.8.7.19.xlsx",
    sheet = "Table 2", col_names = FALSE, skip = 0)

Global Warming: A Look at the Data

The greenhouse effect drives the temperature increase we have seen at the global level over the last 140 years.  Below is a PDF of a PowerPoint presentation that I created for a local town hall on environmental issues.  I look at sunspots, an ENSO index, and carbon dioxide levels.  All three are important, but CO2 has the strongest correlation with temperature.  I do some plots and fit an ARIMA model.

Presentation Earth Day 2019 – click here to see the PDF.

Are Atlantic Storms Increasing?

I fit three simple linear regression models to the logarithm of the number of storms plus one, for tropical storms, hurricanes, and major hurricanes, from 1851 to 2014.  The data can be found at

https://www.nhc.noaa.gov/climo/images/AtlanticStormTotalsTable.pdf

The results of the regressions – done in R – are below, as well as a plot of the models and the data, where the fitted lines are found by taking the exponential of the model fit and subtracting one from the result.  The blue lines are the fits and the green lines are 96.6% prediction intervals.

atl.strm

The models fit (to the log of the number plus one) are below.  The data indicate a 0.39% yearly increase in the number of tropical storms, a 0.19% yearly increase in the number of hurricanes, and a 0.47% yearly increase in the number of major hurricanes over the time period 1851 to 2014.  (For a slope b fit on the log scale, the implied yearly growth factor is exp(b); for example, exp(0.003885) ≈ 1.0039.)
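
A one-line check of those rates, assuming 'mod.log.ds' is the list of fitted models produced by the plotting function given further below:

# percent yearly increase implied by each slope
sapply( mod.log.ds, function( m ) 100*( exp( coef( m )[ 2 ] ) - 1 ) )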

The Three Models:

Tropical Storms over time:

Call:
lm(formula = lds[[i]] ~ ds[[1]])

Residuals:
     Min       1Q   Median       3Q      Max
-1.50491 -0.24411  0.02969  0.22772  0.90256

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -5.237098   1.117248  -4.687 5.83e-06 ***
ds[[1]]      0.003885   0.000578   6.721 2.92e-10 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.3504 on 162 degrees of freedom
Multiple R-squared: 0.2181,     Adjusted R-squared: 0.2132
F-statistic: 45.17 on 1 and 162 DF,  p-value: 2.922e-10

Hurricanes over time:

Call:
lm(formula = lds[[i]] ~ ds[[1]])

Residuals:
     Min       1Q   Median       3Q      Max
-1.74129 -0.24892  0.01169  0.27125  0.85703

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.9239955  1.3290087  -1.448  0.14964
ds[[1]]      0.0019150  0.0006875   2.785  0.00598 **
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.4168 on 162 degrees of freedom
Multiple R-squared: 0.0457,     Adjusted R-squared: 0.03981
F-statistic: 7.758 on 1 and 162 DF,  p-value: 0.005984

Major Hurricanes over time:

Call:
lm(formula = lds[[i]] ~ ds[[1]])

Residuals:
    Min      1Q  Median      3Q     Max
-1.2755 -0.3994  0.0403  0.3947  1.0793

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -8.1698419  1.6829089  -4.855 2.82e-06 ***
ds[[1]]      0.0046922  0.0008706   5.390 2.45e-07 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.5278 on 162 degrees of freedom
Multiple R-squared: 0.152,      Adjusted R-squared: 0.1468
F-statistic: 29.05 on 1 and 162 DF,  p-value: 2.454e-07

Comments:

It is known that we are now doing a better job of counting storms than in the 19th century: some storms never make landfall, so they would not necessarily have been counted in the 1800s or early 1900s.  However, the largest increase above is for major hurricanes, which should have been measured quite well over the full time period of the data.

The R Code:

atl.strm.plot.fun <- function(ds = atl.strm) {
  
  print(ds[1:10,])
  
  # log of the counts plus one (column 1 of ds is the year)
  lds = log(ds[,2:4]+1)
  mod.log.ds = list(1,2,3)
  pred.ds = list(1,2,3)
  
  # fit a linear trend on the log scale for each storm class,
  # with 96.6% prediction intervals
  for (i in 1:3) {
    mod.log.ds[[i]] = lm(lds[[i]]~ds[[1]])
    pred.ds[[i]] = predict(mod.log.ds[[i]], interval="predict", level=0.966)
  }
  
  # back-transform the fits and intervals to the count scale
  pred.ds = lapply(pred.ds,exp)
  pred.ds = lapply(pred.ds, "-", 1)
  yl=c(35,20,12)
  
  par(mfrow=c(3,1), mar=c(4,4,2,1), oma=c(1,1,3,1))
  
  for (i in 1:3) {
    plot(ds[[1]],ds[[i+1]], ylab=colnames(ds)[i+1], xlab="Year", 
         cex=.75, ylim=c(0,yl[i]))
    lines(ds[[1]],pred.ds[[i]][,1], col="blue")   # fitted line
    lines(ds[[1]],pred.ds[[i]][,2], col="green")  # lower prediction bound
    lines(ds[[1]],pred.ds[[i]][,3], col="green")  # upper prediction bound
  }
  mtext("Atlantic Storms", side=3, font=1, outer=T, line=1)
  
  lapply(mod.log.ds,summary)
  
}

The First Ten Lines of the Data Set:

   Year Tropical_Storms Hurricanes Major_Hurricanes
1  1851               6          3                1
2  1852               5          5                1
3  1853               8          4                2
4  1854               5          3                1
5  1855               5          4                1
6  1856               6          4                2
7  1857               4          3                0
8  1858               6          6                0
9  1859               8          7                1
10 1860               7          6                1

I Am Back

Sorry for my long absence. I had some health problems. I will get back to the economic problem soon.

Storms

Here is a plot of the costs of weather disasters in the USA since 1980 – data from the NOAA website, https://www.ncdc.noaa.gov/billions/time-series. These are the disasters costing more than $1,000,000,000. I fit the exponential trend by fitting a linear trend to the log of the cost data plus 1 (there was a year with no billion-dollar storms, and the log of zero is minus infinity, while the log of 1 is zero), then taking the exponential of the fitted line and subtracting 1 from the result. I find the result scary, particularly given the costs of 2005 and 2017. The costs are inflation adjusted.
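
A minimal sketch of that trend fit, assuming 'year' and 'cost' are aligned numeric vectors of the yearly totals (hypothetical names):

# linear trend on the log scale, then back-transform to the dollar scale
mod = lm( log( cost + 1 ) ~ year )
fit = exp( fitted( mod ) ) - 1
plot( year, cost, type="h", xlab="Year", ylab="Billion Dollars" )
lines( year, fit, col="blue" )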

The Greenhouse Effect

Here is a plot that I did last year of a time series model, where I fit temperature anomaly to sunspot numbers, carbon dioxide levels, and the ENSO index.

Here is the R output.

Call:
arima(x = temp.anomaly.7.71.to.11.13, order = c(3, 0, 0), seasonal = c(2, 0, 0),
    xreg = xmat)

Coefficients:
         ar1     ar2     ar3    sar1    sar2  intercept    enso     co2  sunspot
      0.4738  0.1751  0.0954  0.1187  0.1391    -2.9108  0.0563  0.0089    5e-04
s.e.  0.0241  0.0265  0.0242  0.0246  0.0243     0.1573  0.0074  0.0005    2e-04

sigma^2 estimated as 0.01281:  log likelihood = 1297.64,  aic = -2575.27

The temperature anomaly is the Berkeley Earth monthly global average temperature for land and ocean, http://berkeleyearth.org/data. The temperature series goes from July 1871 to November 2013.

The data for the sunspots come from the WDC-SILSO at the Royal Observatory of Belgium, Brussels, http://sidc.oma.be/silso/datafiles, where the data used is the series as it was before the site revised its data in July of 2015. The sunspot series goes from April 1871 to August 2013.

The data for the CO2 level is from the NOAA website. The CO2 level has been smoothed by a twelve-month moving average, and some work went into combining the two series; see https://vanwardstat.wordpress.com/2015/08/14/fitting-sunspot-numbers-and-co2-concentrations-to-temperature-anomalies/. The data go from July 1871 to November 2013.

The index for the ENSO, at NOAA, has been averaged over the years for which the two data sets overlap, and the resulting series has been shifted to the right by one-half month using the average of two consecutive values. The data are at https://www.esrl.noaa.gov/psd/enso/mei.ext/table.ext.html and https://www.esrl.noaa.gov/psd/enso/mei/table.html. The seasonal component of the ENSO index was removed with a twelve-month moving average. The data run from July 1871 to November 2013.
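
A sketch of the twelve-month moving average used for the smoothing, assuming 'x' is a monthly series (a centered twelve-term filter via stats::filter):

ma12 = function( x ) filter( x, rep( 1/12, 12 ), sides=2 )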

This is a quick and dirty regression. That the model fits so well is remarkable. Maybe someone who is getting paid for the work will do a more careful job.

Wages and Inflation 1964 to 2017

Many years ago, in the mid-1970s, I was living through a time of high inflation and energy price shocks.  At that time, I thought that the energy shocks and inflation were causing real wage income to decrease, by way of inflation, and that the distribution of wealth was shifting in such a way that a person's control over their income picked the winners and losers.

I did some research on the topic for my creative component at Iowa State in 1985 – for my master's degree – but the subject has lain fallow in my mind since then, with occasional forays into the data.  This is the beginning of looking at the problem again. Below are the CPI-All Urban Consumers (Current Series), https://data.bls.gov/timeseries/CUUR0000SA0, and three sets of data from https://www.bls.gov/webapps/legacy/cesbtab8.htm – bullet list 8 – giving average hourly wages for all non-agricultural private workers, goods producers, and service providers.  All the datasets contain monthly figures from January 1964 to October 2017 and are on the Bureau of Labor Statistics website.
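
A minimal sketch of the wage adjustment behind the plot below, assuming 'wages' and 'cpi' are aligned monthly vectors (hypothetical names):

# real wages expressed in January 1964 dollars
real.wages = wages / ( cpi / cpi[ 1 ] )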

inflation and average adjusted wages 1964 to 2017

From the plots, real income was increasing strongly up until around the 1973 Oil Embargo, then dropped, rose, and fell until the mid-to-late 1990s, and has been on an upward trend since then, except for the last few years.  Also, producers of goods make more money than service providers, on average.

More on this next month.

R Plots Using plot()

In this post, an example of creating multiple plots on a page, using the plot() function in R, is presented. Four plots are generated using data from some government websites, including the data found in the blog of last month.

To start, the library 'TeachingDemos' is loaded. Then the layout for the plots is entered.

library("TeachingTools")
layout(matrix(c(1,2,3,4),2,2, byrow=T))

'TeachingDemos' contains the function shadowtext(), which is used in the third and fourth plots. In layout(), the matrix() function creates a two-by-two matrix with 1 and 2 in the first row and 3 and 4 in the second row. By default, matrix() reads data into the matrix by columns rather than by rows, so the argument 'byrow=T' is necessary to read the data in by rows. The four plots will be created one at a time below. The first plot will be put into the location with value 1, the second with value 2, and so forth.
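
Before any plotting, the layout can be checked with layout.show() from the graphics package, which draws the numbered plot regions:

layout.show( 4 )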

The first plot is of bankruptcy data, from the US courts website http://www.uscourts.gov/statistics-reports/caseload-statistics-data-tables, and gives types of bankruptcies in Iowa from 2001 to 2015. Most bankruptcies caused by medical expenses are chapter 7 nonbusiness bankruptcies, and most chapter 7 nonbusiness bankruptcies have medical expenses as a contributing factor. The data is given first.

> bankruptcies_ia
   Year overalltotal total7 total11 total12 total13 bustotal bus7 bus11
1  2001        11076  10459      28       7     582      289  243    28
2  2002        11808  11186      26       9     587      354  309    25
3  2003        12582  11895      28      12     647      323  264    27
4  2004        13082  12430      23       1     628      360  324    23
5  2005        18709  17771      18       6     914      455  412    18
6  2006         4891   4316      11       3     561      208  182     9
7  2007         7036   6275      11       6     744      243  213    11
8  2008         8125   7383      14       2     726      342  317    14
9  2009        10171   9354      21       5     789      384  351    21
10 2010         9829   9013      36      10     770      381  329    33
11 2011         7965   7231      24      10     700      356  314    24
12 2012         6411   5803      19       9     578      255  223    17
13 2013         5747   5240      10       4     493      230  209    10
14 2014         5079   4607      20       4     448      170  139    19
15 2015         4535   4054       8      14     459      187  154     8
   bus12 bus13 nonbustotal nonbus7 nonbus11 nonbus13
1      7    11       10787   10216        0      571
2      9    11       11454   10877        1      576
3     12    20       12259   11631        1      627
4      1    12       12722   12106        0      616
5      6    19       18254   17359        0      895
6      3    14        4683    4134        2      547
7      6    13        6793    6062        0      731
8      2     9        7783    7066        0      717
9      5     5        9787    9003        0      784
10    10     9        9448    8684        3      761
11    10     8        7609    6917        0      692
12     9     5        6156    5580        2      573
13     4     7        5517    5031        0      486
14     4     8        4909    4468        1      440
15    14    11        4348    3900        0      448

Next the plot is created. In the function plot(), the first argument is the vector of x values and the second the vector of y values. In the object 'bankruptcies_ia', the variable 'Year' is in the first column and chapter 7 nonbusiness bankruptcies are in the thirteenth column. The bankruptcy numbers are divided by 1000 for clarity in the y axis labels. The type of plot is set to a line plot with the argument 'type="l"'. The x and y labels are set by 'xlab' and 'ylab'. The argument 'main' gives the heading of the plot. By having the heading on two lines in the call, the heading plots on two lines in the plot.

The color and width of the plotted line are given by 'col="red4"' and 'lwd="2"'. The color of the main heading is given by 'col.main="darkred"'. The colors of the labels and axes are similarly set to "darkred". The box around the plot and the tick marks are set to grey with the argument 'fg="grey"'. The font for the heading is set to italic with the argument 'font.main=3'. The limits of the y axis are set to zero and twenty with 'ylim=c(0,20)'.

plot(bankruptcies_ia[,1], bankruptcies_ia[,13]/1000, type="l",
xlab= "Year",
ylab= "Thousand", main = "Number of Chapter 7 Nonbusiness
Bankruptcies in IA 2001 to 2015", col="red4", lwd="2",
col.main="darkred", col.lab="darkred", col.axis="darkred",
fg="grey", font.main=3, ylim=c(0,20))

The plot is given below. Note that the plot is very plain.

plot 1

To add interest to the plot, the area below the line is filled with a polygon of diagonal lines, done with a call to the function polygon(). In R, new plotting commands continue to add to a plot until a new plot() function is called (unless R is told not to refresh on a new plot() call). The first argument to polygon() is the vector of x values and the second argument is the vector of y values, giving the vertices of the polygon. In the bankruptcy plot, the polygon starts and ends at the lower left corner. The 'density' argument gives the density of the diagonal lines. The color of the lines is set to '"red4"' and the line width to '1'.

polygon(c(2001,2001:2015, 2015),
c(0, bankruptcies_ia[,13]/1000, 0),
density=8, col="red4", lwd=1)

The plot is given below. Note that the plot is easier to evaluate now.

plot 2

The second plot is of property taxes in Iowa for the years 2004 to 2014. The source of the data is at https://www2.census.gov/govs/local/. See the previous blog post for more information on the individual yearly files. There is nothing new in the code for the second plot, so the code is given without comment, starting with the data.

deflated.property.tax
[1] 3566090 3573975 3557993 3696317 3720224 4043835 4167853 4275320
[9] 4302292 4341761 4344914

plot(2004:2014, deflated.property.tax/1000000, ylim=c(0,6),
ylab="Billion Dollars", xlab="Year", col="red4",
lwd="2", main="Total IA Property Taxes\n2004 to 2014",
col.main="darkred",
col.lab="darkred", col.axis="darkred",
font.main=3, fg="grey", type="l")
polygon(c(2004,2004:2014, 2014),
c(0, deflated.property.tax/1000000, 0),
density=8, col="red4", lwd=1)

The third plot is of non-capital expense spending on hospitals in Iowa from 2004 to 2014. The source of the data is the same as for the previous plot.

The plot is a little more complex. The call to plot() is straightforward. However, the call to plot() is followed by a call to lines(), which plots a second line, and then by two calls to polygon(), one for each of the two polygons. The data is given below.

> deflated.hospital.expend
      year      sl         s         l
 [1,] 2004 1644823  734475.9  910347.6
 [2,] 2005 1720450  725427.8  995022.7
 [3,] 2006 1816380  805646.4 1010733.1
 [4,] 2007 1896899  818614.6 1078284.9
 [5,] 2008 2040809  854933.6 1185875.8
 [6,] 2009 2311303  956767.1 1354536.3
 [7,] 2010 2365028  955301.7 1409726.0
 [8,] 2011 2470684 1030070.0 1440613.7
 [9,] 2012 2643324 1254339.0 1388984.7
[10,] 2013 2676588 1270130.9 1406456.6
[11,] 2014 2796166 1342094.1 1454072.3

> deflated.hospital.capital.expend
      year     sl      s      l
 [1,] 2004 125433  76164  49269
 [2,] 2005 155842  76703  79139
 [3,] 2006 128063  73105  54958
 [4,] 2007 170249  96844  73405
 [5,] 2008 172931  86148  86783
 [6,] 2009 226999 103459 123540
 [7,] 2010 223355  59911 163444
 [8,] 2011 254702  87990 166712
 [9,] 2012 317178 140569 176609
[10,] 2013 315689 126562 189127
[11,] 2014 322188 146256 175932

First, two lines are plotted: one for state plus local and one for local. See the plot below the code.

plot(2004:2014, deflated.hospital.expend[,2]/1000000 -
deflated.hospital.capital.expend[,2]/1000000,
ylim=c(0,3.5), ylab="Billion Dollars", xlab="Year",
col=c("red4"), col.main="darkred",
col.lab="darkred", col.axis="darkred",
main="IA Government Hospital Expenditures\n2004 to 2014",
lwd="2", font.main=3, type="l", fg="grey")
lines(2004:2014, deflated.hospital.expend[,4]/1000000-
deflated.hospital.capital.expend[,4]/1000000, lwd="2",
col=c("red1"))

plot 3

Next the two polygons are plotted. The first call to polygon() plots the top polygon. Note that the angle of the lines is 135 degrees rather than the default 45 degrees used in the first two plots. The second call to polygon() plots the bottom polygon. For the second polygon, the angle is 45 degrees and the color is 'red1' rather than 'red4'. By plotting the lower polygon second, the color of the lower line stays 'red1'.

polygon(c(2004,2004:2014, 2014:2004),
c(deflated.hospital.expend[1,4]/1000000-
deflated.hospital.capital.expend[1,4]/1000000,
deflated.hospital.expend[,2]/1000000-
deflated.hospital.capital.expend[,2]/1000000,
deflated.hospital.expend[11:1,4]/1000000-
deflated.hospital.capital.expend[11:1,4]/1000000),
density=8, col="red4", lwd=1, angle=135)
polygon(c(2004,2004:2014, 2014),
c(0, deflated.hospital.expend[,4]/1000000-
deflated.hospital.capital.expend[,4]/1000000,
0), density=8, col="red1", lwd=1)

plot 4

Last, some text is added to the plot. The function shadowtext() puts a shadow around text and is found in the package 'TeachingDemos'. The first argument of shadowtext() is the placement of the text on the x axis, the second on the y axis, and the third the text itself. The argument 'bg="white"' sets a white shadow. The argument 'r=.3' sets the size of the shadow as a proportion of the size of the text. The function text() is similar to shadowtext(), except there is no shadow argument. See the plot below the code.

shadowtext(2011, .75,"Local", col="red1", bg="white", r=.3, font=1.5)
shadowtext(2009, 1.7,"State", col="red4", bg="white", r=.3, font=1.5)
text(2007.5, 2.9,"Minus Capital Outlays", col="darkblue", font=1.5)

plot 5

The fourth plot is of health expenditures by the government in Iowa.  The code does not use anything new, so no comments are made on the code. The data is from the same source as the last two plots and is given below.

> deflated.health.expend
      Year StateLocal     State    Local
 [1,] 2004   416871.7 104597.30 312274.4
 [2,] 2005   418761.0  98568.17 320192.9
 [3,] 2006   466162.7 105526.68 360636.0
 [4,] 2007   488350.4 130628.21 357722.2
 [5,] 2008   509933.5 135971.63 373961.9
 [6,] 2009   538308.7 145292.52 393016.1
 [7,] 2010   530603.7 135314.36 395289.3
 [8,] 2011   525645.7 129333.71 396312.0
 [9,] 2012   568904.7 132340.68 436564.1
[10,] 2013   410256.4 131329.61 278926.8
[11,] 2014   391885.7 124200.98 267684.7

The code follows.  Below the code is the final figure, with the four plots.


plot(2004:2014, deflated.health.expend[,2]/1000000, ylim=c(0,.8),
ylab="Billion Dollars", xlab="Year",
col="red4", font.main=3, fg="grey",
col.main="darkred",
col.lab="darkred", col.axis="darkred", type="l",
main="IA Government Health Expenditures\n2004 to 2014",
lwd="2")
lines(2004:2014, deflated.health.expend[,4]/1000000, lwd="2", col="red1")
polygon(c(2004,2004:2014, 2014:2004),
c(deflated.health.expend[1,4]/1000000,
deflated.health.expend[,2]/1000000,
deflated.health.expend[11:1,4]/1000000),
density=8, col="red4", angle=135, lwd="1")
polygon(c(2004, 2004:2014, 2014),
c(0, deflated.health.expend[,4]/1000000, 0),
density=8, col="red1", lwd=1)
shadowtext(2011, .2,"Local", col="red1", bg="white", r=.3, font=1.5)
shadowtext(2008.5, .45,"State", col="red4", bg="white", r=.3, font=1.5)

plot 6

That’s it!!

Pulling Data Out of Census Spreadsheets Using R

In this post, I show a method for extracting small amounts of data from somewhat large Census Bureau Excel spreadsheets, using R.  The objects of interest are expenditures of state and local governments on hospital capital in Iowa for the years 2004 to 2014. The data can be found at http://www2.census.gov/govs/local/. The files at the site are yearly files.

The files to be used are those named 'yrslsstab1a.xls', where 'yr' is replaced by the two digits of the year, for example, '04' or '11'. The individual yearly files contain data for the whole country and for all of the states, over all classes of state and local government revenue and expenditure. The task is to extract three data points from each file – state and local expenditures, state expenditures, and local expenditures – for the state of Iowa.

The structure of the files varies from year to year, so first reviewing the files is important. I found two patterns for the expenditure data – data with and data without margins of error. The program locates the columns for Iowa and the row for hospital capital expenditures. Then, the data are extracted and put in a matrix for outputting.

First, character strings of the years are created, to be used in referencing the data sets, and a data frame is created to contain the final result.

years = c(paste("0", 4:9, sep=""), paste(10:14))
hospital.capital.expend <- data.frame(NA,NA,NA)

Second, the library 'gdata' is loaded. The library 'gdata' contains functions useful for manipulating data in R and provides for reading an Excel file into R from a URL.

library(gdata)

Third, a loop is run through the eleven years to fill in the 'hospital.capital.expend' data frame with the data from each year. The object 'fn' contains the URL of the Excel file for a given year. The function paste() concatenates the three parts of the URL. Note that 'sep' must be set to "" in the function.


for (i in 1:11)
{
fn = paste("http://www2.census.gov/govs/local/",years[i],
"slsstab1a.xls", sep="")

Next, the Excel file is read into the object 'ex'. The argument 'header' is set to 'F' so that all of the rows are input. Also, since all of the columns contain some character data, all of the data is forced to be character by setting 'stringsAsFactors' to 'F'. The function used to read the spreadsheet is read.xls() in the package 'gdata'.


ex = read.xls(fn, sheet=1, header=F, stringsAsFactors=F)

Next, the row and column indices of the data are found using the functions grepl() and which(). The first argument in grepl() is a pattern to be matched. For a data frame, the grepl() function returns a logical vector of 'T's and 'F's of length equal to the number of columns in the data frame – giving 'T' if the column contains the pattern and 'F' if not. Note that the pattern is a regular expression, in which '*' means "zero or more of the preceding element" rather than a simple wild card; since grepl() matches anywhere within a string, the pattern "Hospital" alone would behave the same way here. For a character vector, grepl() returns 'T' if an element of the vector matches the pattern and 'F' otherwise.

The which() function returns the indices of a logical vector which have the value 'T'. So, 'ssi1' contains the index of the column containing 'Hospital' and 'ssi2' contains the index of the column containing 'Iowa'. The object 'ssi4' contains the row index of the second occurrence of 'Hospital' in that column, since 'ex[,ssi1]' is a character vector instead of a data frame. For all of the eleven years, the second incidence of 'Hospital' in the 'Hospital' column contains hospital expenditures.


ssi1 = which(grepl("*Hospital*", ex, ignore.case=T))
ssi2 = which(grepl("Iowa", ex, ignore.case=T))
ssi4 = which(grepl("Hospital",ex[,ssi1], ignore.case=T))[2]

Next, the data are extracted, and the temporary objects are removed. If the column index of 'Iowa' is less than 80, no margin of error was included and the data points are in the column of 'Iowa' and the next two columns. If the column index of 'Iowa' is larger than 79, a margin of error was included and the data are in the column of 'Iowa' and the second and third columns to the right.

The capital expenditures are found one row below the 'Hospital' row, so one is added to 'ssi4' to get the correct row index. The data are put in the data frame 'df.1', which is row bound to the data frame 'hospital.capital.expend'. The names of the columns in 'df.1' are set to 'NA' so that the row bind will work. Then the temporary objects are removed and the loop ends.


if (ssi2<80) ssi5=ssi2+0:2
else ssi5 = ssi2 + c(0,2,3)
df.1 = data.frame(ex[ssi4+1, ssi5], stringsAsFactors = F)
names(df.1)=c(NA,NA,NA)
hospital.capital.expend = rbind(hospital.capital.expend, df.1)
rm(fn, ex, df.1, ssi1, ssi2, ssi4, ssi5)
}

There are just a few steps left to clean things up. The first row of 'hospital.capital.expend', which just contains 'NA's, is removed. Then, the commas within the numbers, as extracted from the census file, are removed from the character strings using the function gsub(), and the data frame is converted to a numeric matrix. Next, the eleven years are column bound to the matrix. Last, the columns are given names and the matrix is printed out.


hospital.capital.expend = as.matrix(hospital.capital.expend[-1,])
hospital.capital.expend = matrix(as.numeric(gsub(",","",hospital.capital.expend)),ncol=3)
hospital.capital.expend = cbind(2004:2014,hospital.capital.expend)
colnames(hospital.capital.expend) = c("Year", "State.Local", "State", "Local")
print(hospital.capital.expend)

That’s it!!!