Updates and Working Notes

How to Switch Your Workflow from Stata to R, One Bit at a Time

Wednesday, 15 August 2018

A recent exchange on Twitter reminded me of my switch to R from Stata. I’d started grad school in 1999, before R hit 1.0.0, so I’d been trained exclusively in Stata. By 2008, I had way more than the proverbial 10,000 in-seat hours in Stata, and I knew all the tricks to make it do just what I wanted. I was even Stata Corp.’s on-campus rep at my university. Still, I’d started dabbling in R. Then as now, there were specific things R could do that Stata couldn’t.1 But how to get those advantages without throwing out my hard-earned skills and starting over as a complete n00b? The answer was: a little bit at a time.

Fortunately, it’s not difficult to switch back and forth within a given project, so you can start bringing some R to your Stata-based workflow while leaving it mostly intact. Then, if and when you find yourself doing more in R than in Stata, you can flip and start using Stata from within R.

So, install R and let’s get you started.

Running R from Stata

The trick to running R from within your do-file is first to save the data you want to pass to R, then call the .R file with the commands you want to run in R (the “R script”), then—if necessary—reload the R output into Stata.

While it’s also possible to use Stata’s shell command to run an R script (for illustrative purposes, let’s pretend it’s called my_script.R), Roger Newson’s rsource module makes it particularly easy. Install it as follows:

 ssc install rsource, replace

Unfortunately, the information rsource needs about your R installation is a bit different depending on your OS, but once installed, adding this platform-independent code to your do-file will run the script:

if "`c(os)'"=="MacOSX" | "`c(os)'"=="UNIX" {
    rsource using my_script.R, rpath("/usr/local/bin/R") roptions(`"--vanilla"')
}
else {  // windows
    rsource using my_script.R, rpath(`"c:\r\R-3.5.1\bin\Rterm.exe"') roptions(`"--vanilla"')  // change version number, if necessary
}

Of course, you could choose to skip the whole if-else and just include the line that runs on your machine, but that’s not doing any favors to your collaborators or anyone else trying to reproduce your results. You might also just prefer to specify the rpath and roptions in your profile do-file,2 but again, then you’ll need to let others know to do the same or they won’t be able to run your do-file.
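
If you do go the profile route, the settings rsource reads are global macros; something like the sketch below does it on a Mac (the path, as above, is machine-specific, and you should double-check the macro names against rsource’s help file):

global Rterm_path "/usr/local/bin/R"
global Rterm_options `"--vanilla"'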

Note, too, that if you don’t have much R code to run, it might be easiest to just keep it in your do-file rather than using a separate script. You can do this using the terminator option to rsource, though a downside to this approach is that it doesn’t allow you to if-else the rsource command by your OS. In the do-file below, I also use the regsave module to save my results to pass them to R; install it using ssc install regsave, replace.

clear
set more off

sysuse auto, clear
gen wt = weight/1000
regress mpg wt displacement foreign trunk headroom length
regsave using "~/Desktop/R_Stata/auto_results.dta", replace

rsource, terminator(END_OF_R) rpath("/usr/local/bin/R") roptions(`"--vanilla"')
// rsource using my_script.R, rpath(`"c:\r\R-3.5.1\bin\Rterm.exe"') roptions(`"--vanilla"')  // use this line instead if you run a windows box

library(tidyverse);     # collection of all-around useful R packages
library(haven);         # for importing Stata datasets
library(dotwhisker);    # easy and beautiful regression plots, imho

auto_results <- read_dta("~/Desktop/R_Stata/auto_results.dta") %>%
    rename(term = var,
           estimate = coef,
           std.error = stderr) %>%
    filter(term != "_cons");
dwplot(auto_results);
dwplot(auto_results);
ggsave("~/Desktop/R_Stata/auto_results.png", width = 5, height = 4);

END_OF_R
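
If your R code generates results you need back in Stata, the last step of the round trip is to have R write a .dta file and then load it in the do-file once rsource finishes. A minimal sketch using haven’s write_dta()—haven and auto_results are already loaded in the R block above, and the file name here is made up:

write_dta(auto_results, "~/Desktop/R_Stata/auto_results_from_R.dta");

Then, back in Stata, a plain use "~/Desktop/R_Stata/auto_results_from_R.dta", clear picks things up where R left off.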

Running Stata from R

So maybe you’ve gotten to the point where you spend more of your time in R than in Stata, but there are still a few parts of your work that you just want (or need!) to keep in Stata. Running a do-file (my_do_file.do) from inside your R script is easy with Luca Braglia’s RStata package:

if (!require(RStata)) install.packages("RStata"); library(RStata) # this will install RStata if not already installed

stata("my_do_file.do", 
      stata.path = "/Applications/Stata/StataMP.app/Contents/MacOS/stata-mp", # yours probably differs: use the chooseStataBin() command on windows or linux machines; on Macs, right click on the Stata app, select "Show Package Contents", then see what's in the Contents/MacOS/ directory
      stata.version = 13)  # again, specify what _you_ have

On this side as well, it’s possible to set the arguments just once, in your .Rprofile file. In my case, these two lines do the trick:

options("RStata.StataPath" = "/Applications/Stata/StataMP.app/Contents/MacOS/stata-mp")
options("RStata.StataVersion" = 13)

Since Stata isn’t free and open-source, it’s even more likely that others will have different setups anyway, so this may make the most sense. Be sure to comment your code to clue people in, though.

If you just want to use a single Stata command, RStata::stata3 will do that for you, too, with no need for a do-file. From the RStata package documentation:

library("RStata")
# remember to set RStata.StataPath & RStata.StataVersion in your .Rprofile first!  See https://www.rdocumentation.org/packages/RStata/

## Data input to Stata
x <- data.frame(a = rnorm(3), b = letters[1:3])
stata("sum a", data.in = x)                         
## . sum a
## 
##     Variable |       Obs        Mean    Std. Dev.       Min        Max
## -------------+--------------------------------------------------------
##            a |         3    .5294933    .5045189  -.0456111   .8975471
## Data output from Stata (e.g., obtain 'auto' dataset)
auto <- stata("sysuse auto", data.out = TRUE)
## . sysuse auto
## (1978 Automobile Data)
head(auto)
##            make price mpg rep78 headroom trunk weight length turn
## 1   AMC Concord  4099  22     3      2.5    11   2930    186   40
## 2     AMC Pacer  4749  17     3      3.0    11   3350    173   40
## 3    AMC Spirit  3799  22    NA      3.0    12   2640    168   35
## 4 Buick Century  4816  20     3      4.5    16   3250    196   40
## 5 Buick Electra  7827  15     4      4.0    20   4080    222   43
## 6 Buick LeSabre  5788  18     3      4.0    21   3670    218   43
##   displacement gear_ratio  foreign
## 1          121       3.58 Domestic
## 2          258       2.53 Domestic
## 3          121       3.08 Domestic
## 4          196       2.93 Domestic
## 5          350       2.41 Domestic
## 6          231       2.73 Domestic
## Data input/output
(y <- stata("replace a = 2", data.in = x, data.out = TRUE))
## . replace a = 2
## (3 real changes made)
##   a b
## 1 2 a
## 2 2 b
## 3 2 c

Summing Up

Moving parts of your work from Stata to R is totally feasible. Lots of people (for example, in the thread that touched this post off, Steve Rodgers) really want to take advantage of the superior graphics capabilities of R, especially the ggplot ecosystem, even while sticking to Stata for most of their work. Once your feet are wet, you may then decide R’s many other benefits (the free part, the super-helpful community, the transferable job skills you can teach your students, the free part, the cutting-edge stuff available years before it’s in Stata, the way RStudio makes it dead easy to do reproducible research through dynamic documents and version control, and, once again, the free part) make switching over all the way to be worth the additional marginal effort. Or you may not.

I completed the transition in three or four years, at my own pace: when I felt comfortable moving another chunk of my workflow over to R, I did, but not before. If I were doing it over right now, with the tidyverse packages dramatically reducing the slope of the learning curve, I might move faster, but there’s no rush, really. Do what works for you.

Additional Notes

  • This post by John Ricco describing how to translate Stata data cleaning commands to the dplyr idiom will likely be helpful to those new to tidyverse-style R and wanting to move quickly.
  • I also ran across the Rcall package while writing this up, but I haven’t tried it. You may find it useful.
  • OTOH, these 2010 slides by Oscar Torres-Reyna were definitely useful to me back in the day, but as they pre-date both the tidyverse and RStudio—the wonders of which really cannot be overstated—they’re now more likely to cause you unnecessary confusion than help you if you’re a new switcher. Better to steer clear.
  • Great complete treatments on how to do stuff in R: Wickham and Grolemund’s R for Data Science, Ismay and Kim’s Modern Dive, and Chang’s Cookbook for R.
  • RStudio’s Cheat Sheets are also great references.
  • When you’re ready to take the step to using R more than Stata, you’ll want to get fully set up on RStudio, which provides a front end for running R and can integrate with git and GitHub for version control (you will want this). The best resource that I’ve found for this process is Jenny Bryan’s Happy Git and GitHub for the useR.
  • The R community on StackOverflow is full of helpful people. As your Google-fu develops, you’ll find that links to StackOverflow are most likely to get you where you need to go.
  • There are so many fantastic #rstats follows (dozens? hundreds?) on Twitter. With apologies to the—seriously—hundreds of others who’ve taught me tons of stuff over the years, I’m going to grit my teeth and rec just five to get you started: Mara Averick, Jenny Bryan, David Robinson, Julia Silge, and Hadley Wickham.

References

Bryan, Jenny. 2018. “Happy Git and GitHub for the useR.” http://happygitwithr.com/.

Chang, Winston. “Cookbook for R.” http://www.cookbook-r.com.

Ismay, Chester, and Albert Y. Kim. 2018. “Modern Dive: An Introduction to Statistical and Data Sciences via R.” https://moderndive.com/.

Kastellec, Jonathan P., and Eduardo L. Leoni. 2007. “Using Graphs Instead of Tables in Political Science.” Perspectives on Politics 5(4): 755–71.

Wickham, Hadley, and Garrett Grolemund. 2017. R for Data Science. O’Reilly. http://r4ds.had.co.nz.


  1. Then, for me, it was multiple imputation, parallel computation, and the dot-and-whisker plots of regression coefficients introduced to political science by Kastellec and Leoni (2007). On this last one, see also the dotwhisker package. Now my list is different, but even longer. That’s not what I want to get into in this post, though. This post is how, not why.

  2. See the technical note to the help file for rsource for details.

  3. In the argot (heh), this means the stata command in the RStata package.

SWIID Version 7.1 is available!

Tuesday, 14 August 2018

Version 7.1 of the SWIID is now available! In addition to important behind-the-scenes improvements to the estimation routine, this new release:

For more details, you can check out all the R and Stan code used to generate the estimates in the SWIID GitHub repository. As always, I encourage users of the SWIID to email me with their comments, questions, and suggestions.

SWIID Version 6.2 is available!

Monday, 26 March 2018

Version 6.2 of the SWIID is now available! Building on the end-to-end revision accomplished in Version 6.0 last July and the update Version 6.1 last October, this new release:

For more details, you can check out all the R and Stan code used to generate the estimates in the SWIID GitHub repository. As always, I encourage users of the SWIID to email me with their comments, questions, and suggestions.

SWIID Version 6.1 is available!

Friday, 27 October 2017

Version 6.1 of the SWIID is now available! Building on the end-to-end revision accomplished in Version 6.0 last July, this new release:

For more details, you can check out all the R and Stan code used to generate the estimates in the SWIID GitHub repository. As always, I encourage users of the SWIID to email me with their comments, questions, and suggestions.

The SWIID Source Data

Friday, 28 July 2017

Friday, 27 October 2017: Updated with information on the source data for SWIID Version 6.1

I have been producing the Standardized World Income Inequality Database for nearly a decade. Since 2008, the SWIID has provided estimates of the Gini index of income inequality1 for as many countries and years as possible and—given the primary goal of maximizing spatial and temporal coverage—these estimates are as comparable as the available data allow. The dataset has been used widely by academics, journalists, and policymakers. It’s been successful way beyond all my hopes.2 I’ve been adding to it, revising it, improving on it pretty much the entire time since its launch. Now, with the support of the NSF, I am scrapping all of that work and starting fresh. From scratch.

This is the first in a series of posts on how I did it. It focuses on an unheralded but foundational part of the SWIID project, the source data. The basic idea behind the SWIID is to start with the data that has been most carefully picked over to ensure its utmost cross-national comparability: the data of the fantastic Luxembourg Income Study. I’ve heard that generating a single country-year of LIS data takes an average of ten person-months of work. That’s dedication to comparability. ❤️ But the flipside of maximizing comparability is that the LIS’s coverage is pretty sparse: it includes fewer than 300 country-years at last count.3 To address this weakness, the SWIID routine estimates the relationships between Gini indices based on the LIS and all of the other Ginis available for the same country-years, then uses these relationships to estimate what the LIS Gini would be in country-years not included in the LIS but available from other sources.4 The critical first step to making this work is getting a lot of other, non-LIS Ginis. I call these other Ginis the SWIID’s source data. Over the years, I and my research assistants built up a big spreadsheet of data collected from international organizations, national statistical offices, and scholarly books and articles. But it seemed like whenever I checked over these source data, I would find that at least a few figures had been recently revised, or their source was seemingly no longer available, or (worst of all) they evidently had been entered incorrectly. So again: it’s time to start over from scratch.

To be included in the SWIID’s source data, observations need to encompass the entire population of a country without regard to age, location,5 or employment status.6 They need to have an identifiable welfare definition and equivalence scale (more on these below). Finally, because I want to be able to direct users to sources they can easily peruse themselves, observations need to be available online.7

Hand-entering data is tedious and error-prone work, so I automated as much of the process of data collection as practicable. Most international organizations and a few national statistical offices use APIs that facilitate downloading their data, and often the #rstats community has built R packages using these APIs to make the task even easier. I took as much advantage of these resources as possible.8 In the absence of an API, I scripted downloads of any available spreadsheets, preferring clean csv files to excel-formatted ones. If there was no spreadsheet, but data were available in pdf files, I automated downloading these files and then used Thomas Leeper’s tabulizer package to read the tables into R. In the absence of a file to download, I scripted the process of scraping the data from the web. Still, for a variety of reasons, a source’s data sometimes had to be entered by hand in a separate spreadsheet. Many sources contain just a handful or fewer observations, making the payoff to the often laborious process of data cleaning too small to justify the effort. Some sources—including most academic articles—are behind paywalls, making reproducibility a hassle anyway (though I still often used tabulizer to read the data from the pdf before cutting-and-pasting it into the spreadsheet). Some sources, like books, or the .doc file(!) generated annually by New Zealand’s Ministry of Social Development, cannot be read straight into R. Finally, at least one source contains crucial information encoded in the typeface(!!) of its tables, information lost when the tables are scanned into R. All of the entries in this spreadsheet were checked repeatedly for errors,9 and I excluded repeated reports of the exact same observation from different sources. In the end, I was able to automate the collection of more than three quarters of the source data and a much higher percentage of the series that will be updated or are subject to revision, making it easier to incorporate these changes in future versions.
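
For the pdf sources, the pattern was generally along these lines—a sketch with a made-up file name, not one of the actual sources:

library(tabulizer)  # pdf table extraction; requires Java

tables <- extract_tables("some_stats_office_report.pdf")  # returns a list of matrices, one per table
ginis <- as.data.frame(tables[[1]])                       # then clean up in the usual way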

The resulting dataset comprises 14,857(!) Gini coefficients from 2,845 country-years in 192 countries or territories, making the coverage of the SWIID source data broader than that of any other income inequality dataset. This isn’t surprising given that, with the exceptions of the WIID (which, since it provides no original data, isn’t drawn on at all anymore) and the All the Ginis database (which provides little original data, and so isn’t drawn on much), the SWIID source data incorporates all of the data in these other datasets.

So, let’s check out what the source data look like. There is much more data available about the income distribution in some countries than in others. Which countries are most data-rich? The plot below shows the top dozen countries by the count of observations. Canada, by virtue of the excellent Statistics Canada as well as longstanding membership in the OECD and LIS, has 733 observations, many more than any other country. The United Kingdom and United States are next, followed by an interesting mix of countries from around the world with not surprisingly a sizable European representation. All are members of the LIS. On the other hand, eleven countries have only a single observation.

As we’ll see in later posts in this series, observations for the same country in the same year, but with different welfare definitions and equivalence scales or from different sources, are important to generating the SWIID’s cross-nationally comparable estimates. Still, we might be interested to know which countries have the most coverage of the years in the SWIID’s current 57-year timeframe, from 1960 to 2016, because the SWIID’s inequality estimates for countries with fewer country-year observations will include more interpolated values, which in turn will have more uncertainty.

The source data includes observations for Sweden and the United Kingdom in all but one of these years and for the United States in all but six. Argentina—a country not included in the LIS—makes the top 12, with 41 country-year observations. The median country has observations in just eleven different country-years.

We can also get a sense of the available inequality data by turning the question around and asking about coverage across countries over time. There are observations for 122 countries in 2005. Coverage is relatively good in the years from 2000 to 2014, at least 120 countries per year, before dropping to 82 countries for 2015 and just 58 for last year. Country coverage is pretty thin each year through the 1960s and 1970s and still isn’t all that great until the late 1980s.10

Earlier I mentioned that to be included in the SWIID source data, observations need to have an identifiable welfare definition and equivalence scale. A welfare definition is an answer to the question, this Gini measures the distribution of what? The four welfare definitions employed in the SWIID source data are market income, gross income, disposable income, and consumption. Market income is defined as the amount of money coming into the household, excluding any government cash or near-cash benefits, the so-called ‘pre-tax, pre-transfer income.’11 Gross income is the sum of market income and government transfer payments; it is ‘pre-tax, post-transfer income.’ Disposable income, in turn, is gross income minus direct taxes: ‘post-tax, post-transfer income.’12 Consumption does not refer to the money coming into the household at all but rather to the money going out.13 In the source data, Ginis of disposable income are much more common than those using other welfare definitions.
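
The relationships among the three income-based welfare definitions are simple accounting identities; here they are with made-up numbers:

market     <- 50000                 # pre-tax, pre-transfer income
transfers  <- 5000                  # government cash and near-cash benefits
taxes      <- 12000                 # direct taxes
gross      <- market + transfers    # pre-tax, post-transfer income
disposable <- gross - taxes         # post-tax, post-transfer income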

Equivalence scales are the ways in which the size and composition of a household are incorporated into the calculation of its members’ welfare. On the one hand, these factors can simply be ignored, with all households with the same amount of income or consumption treated as if they enjoy the same level of welfare, regardless of their size. One can improve on this household ‘scale’14 by dividing the household’s income by its number of members, that is, by using a per capita scale. But a household of two members and an income of $100,000 is better off than one with a single member and $50,000 due to economies of scale—that’s a big reason why people look for roommates. There are a variety of ways to try to account for these economies by calculating the number of “equivalent adults” in the household. Of the most commonly used adult-equivalent scales, the square-root scale is the most straightforward: one simply divides the household income by the square root of the number of members. The “OECD-modified” scale for the number of adult equivalents (which the OECD itself actually never used) counts the first adult as 1, all other adults as .5, and each child as .3. And there are plenty of other adult-equivalent scales, from the “old OECD” scale (1 for the first adult, 0.7 for each additional adult, and 0.5 for each child) to caloric-requirement-based scales (which are actually very nearly per capita, as it turns out) to a number of country-specific scales. In previous versions of the SWIID, all adult-equivalent scales were considered a single category. Now, the square-root scale and the OECD-modified scale have both been split out, leaving the remaining catch-all adult-equivalent category much smaller.
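
To make the differences concrete, here is each of these scales applied to a hypothetical two-adult, two-child household:

income <- 100000; adults <- 2; children <- 2

income                                          # household: no adjustment
income / (adults + children)                    # per capita: 25000
income / sqrt(adults + children)                # square-root scale: 50000
income / (1 + .5*(adults-1) + .3*children)      # OECD-modified scale: ~47619
income / (1 + .7*(adults-1) + .5*children)      # "old OECD" scale: ~37037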

Differences in the welfare definition and the equivalence scale employed constitute the biggest source of incomparability across observations in the source data, and all twenty of the possible combinations are represented. I’ll take up how we get from these incomparable observations to the SWIID estimates in the next post. In the meantime, if you’d like to see the source data, you can download it from here.

References

Jesuit, David K., and Vincent A. Mahler. 2010. “Comparing Government Redistribution Across Countries: The Problem of Second-Order Effects.” Social Science Quarterly 91(5): 1390–1404.

Morgan, Jana, and Nathan J. Kelly. 2013. “Market Inequality and Redistribution in Latin America and the Caribbean.” Journal of Politics 75(3): 672–85.

Solt, Frederick. 2016. “The Standardized World Income Inequality Database.” Social Science Quarterly 97(5): 1267–81.


  1. I think the clearest explanation of the Gini index is that it is half the average difference in income between all pairs of units—say, households—as a percentage of the mean income of those units. Okay, I said “clearest,” not necessarily “clear.” Anyway, it has a theoretical range of 0 (all households have the same income) to 100 (one household has all the income and the rest have none), but Ginis below 20 or above 60 are rare in the real world. There are good reasons to prefer other measures of inequality, and there are many options, but the Gini is by far the most widely available.

  2. At the time, those hopes were admittedly concerned mostly with getting #Reviewer2 off my back so I could publish a series of manuscripts I had on how the context of inequality is related to people’s political attitudes.

  3. Which is what #Reviewer2 always complained about. R2: Shouldn’t you include Ruritania and Megalomania in your sample, given the broad applicability of your theory? Me: Yes, sure, but like I wrote in the paper, there’s no LIS data for those countries, and the other available data just isn’t comparable. R2: Well then, I recommend rejection. Me: Grr.

  4. If you’re thinking, “hey, multiple imputation for missing data,” cool, that’s what I was thinking too. If you’re thinking of poll aggregators and house effects, yep, it’s very similar. If you’re thinking of inequality as a latent variable, with a number of indicators of varying discrimination, that also works. If you’re thinking you need to look at some cat gifs right about meow, click here.

  5. The requirement for complete territorial coverage was relaxed for minor deviations such as data on Portugal that excludes Madeira and the Azores. It was relaxed somewhat further for early series that covered only the urban population of three highly urbanized countries: Uruguay, Argentina, and South Korea. The general rule, however, is that data is excluded if it measures the income distribution of only urban or rural populations, or of only selected cities, or some other such incomplete territory.

  6. This last requirement is new; it means nearly 600 observations on the distribution of wages across employed individuals that were included in the source data of previous versions of the SWIID are now excluded. Between the lack of information on those out of the workforce and on how workers formed households, these data weren’t very strongly related to the LIS anyway.

  7. For scholarly articles, I preferred DOIs or JSTOR stable URLs, but if those were unavailable I used the publisher website or another repository. For books, I provide the link to the relevant page in Google Books. There were two books that I decided I had to include for which Google Books wouldn’t show the relevant pages (at least not to me); in those two cases, the links I provide just go to the entire volume. I confirmed that the cited pages can be found using Amazon’s “Look Inside” feature, so I consider my “must be available online” rule only badly bent rather than completely broken.

  8. Although the sources with APIs were relatively few, they contained the most data: nearly half of the observations were collected this way.

  9. Which, of course, is not to say that they are error-free. If you spot any problems, or better still, know of sources I might have missed, please let me know!

  10. This is partly a result of my decision to insist on sources that are available online, but it’s just as well: so little information is available about many of the so-excluded observations on that era that I find it hard to have much confidence in them.

  11. It’s important, though, to not think of the distribution of market income as ‘pre-government.’ Beyond taxes and transfers, governments seeking to shape the distribution of income have a wide array of ‘market-conditioning’ or ‘predistribution’ policy options, with minimum wage regulation and labor policy two obvious examples (see, e.g., Morgan and Kelly 2013). Further, even taxes and transfers can profoundly shape the distribution of market income through ‘second-order effects.’ Where robust public pension programs exist, for example, people save less for retirement, leaving many of the elderly without market income in old age and so raising the level of market-income inequality (see, e.g., Jesuit and Mahler 2010).

  12. Note that disposable income still does not take into account, on the one hand, indirect taxes like sales taxes and VAT, or, on the other, public services and indirect government transfers such as price subsidies. There is very little information available about the distribution of such ‘final income,’ pretty much only that generated by the Commitment to Equity Institute, so I exclude it from the SWIID source data at least for the time being.

  13. In previous versions of the SWIID, market and gross income were treated as a single welfare definition, and I am glad to finally be able to split them apart (cf. Solt 2016, 1272). The consumption welfare definition might now be the most heterogeneous within the SWIID source data, varying considerably in whether and how observations treat expenditures on durable goods. Another source of differences within a single welfare definition is the extent to which nonmonetary income—such as the value of food grown for the household’s own consumption or of housing that the owner occupies—is included. The SWIID source data include the variable monetary that indicates whether any nonmonetary income is taken into account, but at present this information is not incorporated into the classification of welfare definitions.

  14. Scare quotes because, strictly speaking, nothing is being scaled at all; it’s simply treating the household as the unit of analysis.

SWIID Version 6.0 is available!

Thursday, 27 July 2017

Version 6.0 of the SWIID is now available! It represents a complete, starting from scratch, end-to-end revision, with all the heavy lifting now done using #rstats and Stan.


As always, I encourage users of the SWIID to email me with their comments, questions, and suggestions.

Notes for Those New to Writing Peer Reviews

Friday, 14 April 2017

Today we had a workshop for our graduate students on writing peer reviews. Here are the notes I spoke from:

I get asked to do a lot of reviews. At the beginning of this semester, I got seven requests within two or three weeks. I used to always say yes, but doing 35 or 40 reviews a year just took too much time. When I was first starting out, I’d take something like six or eight hours on each review, though that pretty quickly got down to four or so. Nowadays it might even be a touch less, spread over two days. I like to give the paper a close read on one day, while taking notes and maybe doing a bit of research. Then the next day, I write up my review, after my thoughts have had a chance to percolate. Anyway, now I have a two-per-month rule to protect my time, though I sometimes break it: I took four out of those seven requests back in January.

I always start my reviews with a quick summary of the piece, but as reviewers, our focus should be on theory, data, and method. For the big-three journals, the old saw is that the standard is “new theory, new data, new method—choose any two,” but regardless of the journal that has asked you to review, for a work to make a contribution, it has to be sound—not new, not super-great, just sound—on all three. Here are a couple of quick notes on each, mostly the points I find myself most often making:

  1. Theory: if you think that the authors1 have overlooked some plausible rival theory, be sure to explain and include specific citations. You don’t have to have a full bibliographic entry; author and year are probably enough, though I usually throw the journal abbreviation in too just to be sure. Reviews aren’t the place to develop your own new rival theory. If you’re really tempted to do so, plan instead on responding to this paper when it comes out in print.

  2. Data: do the authors take advantage of all available data? Probably not—we can’t all look at everything all the time—but if they’ve neglected obvious things: using, for example, just that oddball third wave of the WVS instead of all the waves, or if they have very little data and you know of other sources they can draw on, say so. Of course, if they use some source and you know that there’s other, better data available, point that out to them.

  3. Methods: First, are the methods appropriate? In answering this, you have to judge the methods on their own terms: NOT, oh, this study uses survey data, so it tells us nothing about causality! OR this study just reports an experiment, so it has no external validity!

    • Are interaction terms properly constructed and interpreted?2 There’s software that makes this super-easy. Recommend it by name: “There’s grinter for Stata and interplot for R.” Include cites to Braumoeller’s (2004) IO article and/or Brambor, Clark, and Golder (2006) in PA, too.

    • Are the included controls appropriate? Do the authors include controls that are plausibly causally downstream from their IVs? That messes up results. Include citations. Conversely, do they exclude variables that would plausibly confound the relationship between the IV and the DV? It’s not enough to say that “oh, they didn’t control for X.” You have to explain why including X as a control is important. And again, include citations. You should also suggest a source of data for X. Btw, at least in my book, omitting a control—even one you’re totally sure is a source of spuriousness—is an R&R-able offense, not one that condemns the perpetrating manuscript to rejection. Give the authors a chance to show you you’re mistaken on this.

    • Are the coefficients and statistical significance interpreted correctly? Are the quantities of substantive interest, such as predicted probabilities, estimated and plotted? With their associated uncertainty?

    • Can any included tables be made into graphs? Probably, so be specific about what you have in mind, cite to Kastellec and Leoni’s (2007) Perspectives article, and maybe even give the dotwhisker package a shout-out if you think it’d help.

  4. Note what you’re NOT evaluating: the results themselves. Don’t filter on statistical significance: we need to avoid contributing to publication bias and the pressure way too many people apparently feel to p-hack their way to publication. And this should go without saying, but be sure to check your own presuppositions about what the results ‘should’ show at the door.

    • Nor the question asked. Don’t suggest that authors “reframe” their work around some similar (or not so similar) question. Don’t say that the question just isn’t important enough for the AJPS.3 If you’ve been in my classes, you’ve probably had me push you to ask important questions; you know I totally think that’s a big deal. But as a reviewer, as Justin Esarey argued in the TPM Special Issue on Peer Review, deciding whether the question asked was sufficiently important for publication isn’t your job. That’s for the editor maybe, but really it is for us all as a discipline, as readers.

    • Nor typos, grammar, or citation formatting. If it’s really, really bad, I’ll point out that it’s something the author should be sure to work on. But don’t send in a bunch of line edits. I will always note if I see that cited works are not included in the bibliography. BibTeX is your friend, people!

Finally and above all: take a considerate, helpful tone. Research is hard, and the peer-review process is difficult and frustrating for all of us. Contribute to the collective good by pitching your comments in a constructive tone that you would be happy to read in reviews of your own work. In other words, #BeReviewer1.4 Even if just about everything is going to need to be redone before the manuscript has any shot at publication—and you know, sometimes it really does—write that in a sympathetic way, remembering that there’s another person who will read your words about their work. And always find at least one good quality of the paper to highlight. Be sure to return to that point at the end of your review.


  1. I’ve settled on always writing reviews with the assumption that the piece is co-authored and that the appropriate pronoun is therefore “they.”

  2. This is point number one on Brendan Nyhan’s “Checklist Manifesto for Peer Review” in The Political Methodologist’s Special Issue on Peer Review. Read the whole issue!

  3. OTOH, you should give people credit when they take on hard questions with less-than-ideal data and methods if those data and methods are (approximately) the best available.

  4. Not that jerk, #Reviewer2. In addition to checking out #BeReviewer1 on Twitter, you should also be sure to read Thomas Leeper’s manifesto that started it all.

SWIID Version 5.1 is available!

Thursday, 21 July 2016

Version 5.1 of the SWIID is now available! It revises and updates the SWIID’s source data and estimates. It also includes expanded training modules explaining how to take into account the uncertainty in the estimates in both R and Stata.

As always, I encourage users of the SWIID to email me with their comments, questions, and suggestions.

icpsrdata makes your ICPSR downloads reproducible!

Friday, 20 May 2016

If you download data from the Inter-university Consortium for Political and Social Research archive, you can make your research reproducible with the icpsrdata package, now available on CRAN.

Try pewdata!

Friday, 13 May 2016

If you use Pew Research Center surveys, you can make your research reproducible with the pewdata package, now available on CRAN.

SWIID Wins NSF Support!

Monday, 10 August 2015

The National Science Foundation has awarded three years of support to update and improve the SWIID! Yay!

Use dotwhisker for your APSA slides!

Thursday, 30 July 2015

With the APSA coming up, and in the interest of minimizing the number of times we hear “sorry, I know you won’t really be able to see these regression coefficients,” I thought I’d point R users to dotwhisker, a package UI Ph.D. student Yue Hu and I just published to CRAN. dotwhisker makes regression plots in the style of Kastellec and Leoni’s (2007) Perspectives article quick and easy: after data entry, just two lines of R code produced the easy-to-read-even-from-the-back-of-the-room plot attached to this post. I hope you’ll find it useful, and if you have any suggestions for us, that you’ll file an issue at https://github.com/fsolt/dotwhisker, tweet to me @fredericksolt, or just send me an email [email protected].
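
If you’re curious, the two lines were essentially of this form (a sketch using R’s built-in mtcars data rather than the model from the slides):

library(dotwhisker)
m1 <- lm(mpg ~ wt + cyl + disp, data = mtcars)  # any supported fitted-model object
dwplot(m1)                                      # the dot-and-whisker plot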


Now on CRAN: interplot

Friday, 26 June 2015

Hu Yue and I just published interplot on CRAN, our first R package. interplot makes graphing the coefficients of variables in interaction terms easy. It outputs ggplot objects, so further customization is simple. Check out the vignette and give it a try!
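
A minimal example of what it does (a sketch; mtcars stands in for real data):

library(interplot)

m <- lm(mpg ~ wt * disp, data = mtcars)       # a model with an interaction term
interplot(m = m, var1 = "wt", var2 = "disp")  # effect of wt conditional on disp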


Inequality in China

Friday, 27 March 2015

A new working paper by IMF researchers Serhan Cevik and Carolina Correa-Caro observes that sharply rising inequality has made China one of the most unequal countries in the world. Here’s a graph of SWIIDv5.0 data that illustrates their point.


SWIID Version 5.0 is available!

Thursday, 2 October 2014

Version 5.0 of the SWIID is now available, and it is a major update. A new article of record (currently available as a working paper while under peer review) reviews the problem of comparability in cross-national income inequality data, explains how the SWIID addresses the issue, assesses the SWIID’s performance in comparison to the available alternatives, and explains how to use the SWIID data in cross-national analyses.

The new version also marks the debut of the SWIID web application. The web application allows users to graph the SWIID estimates of any of net-income inequality, market-income inequality, relative redistribution, or absolute redistribution in as many as four countries or to compare these measures within a single country. Its output can be downloaded with a click for use in reports or articles. I hope that it will be of particular value to policymakers, journalists, students, and others who need to make straightforward comparisons of levels and trends in income inequality.

As always, I encourage users of the SWIID to email me with their comments, questions, and suggestions.

SWIID Version 4.0 is available!

Monday, 30 September 2013

Version 4.0 of the SWIID is now available here. Drawing on nearly 14,000 Gini observations in more than 3,100 country-years, this version provides even better estimates of income inequality in countries around the world than in previous versions.

This version introduces two other improvements. First, many users have had trouble making appropriate use of the standard errors associated with the SWIID estimates. The uncertainty, however, can sometimes be substantial, making it crucial to incorporate in one’s analyses. Fortunately, there are now tools in Stata and R that make it quite straightforward to analyze data that is measured with error, and this version of the SWIID includes files that are pre-formatted for use with these tools. The file “Using the SWIID.pdf”, which is also included in the data download, explains how. Some additional examples of using the SWIID with Stata’s mi estimate command prefix can be found towards the end of the slides posted here.
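
In Stata, for instance, the analysis boils down to prefixing one’s model with mi estimate once the pre-formatted file is loaded; a sketch with placeholder file and variable names:

use "swiid_mi.dta", clear                  // hypothetical file name
mi estimate: regress dv x1 x2 gini_net     // placeholder model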

Second, I’ve received several requests for measures of top income share, so in this version I am including estimates of the top 1 percent’s share (the variable share1), standardized to the data provided in the World Top Incomes Database: Country-years included in that dataset are reproduced without modification in the SWIID, and comparable figures for other country-years are estimated using the SWIID’s custom multiple-imputation algorithm. Like all inequality datasets, Top Incomes has tradeoffs—among other things, the share of pre-tax, pre-transfer income reported on tax returns by the richest filers may not be of much theoretical interest to many investigators—but the additional estimates the SWIID provides may prove to be useful to some.

I encourage users of the SWIID to email me with their comments, questions, and suggestions.

My talk at the UN

Sunday, 29 September 2013

Earlier this month, I gave a talk previewing Version 4.0 of the SWIID to the Development Policy and Analysis Division of the United Nations’ Department of Economic and Social Affairs. I had some great conversations and got lots of useful feedback. Slides for the talk can be found here.

SWIID Version 3.1 now available!

Monday, 2 January 2012

Version 3.1 of the SWIID is now available here. The primary difference introduced in Version 3.1 is that the data on which the SWIID is based have again been expanded. Now nearly 4500 Gini observations are added to those collected in the UNU-WIDER data, and for many countries the available data extend to 2010. Also, I made one semantic change: to try to avoid confusion among those who neglect to read about the data they use, the series on pre-tax, pre-transfer inequality is now labeled gini_market rather than gini_gross. Otherwise, very small revisions were made to the SWIID routine from Version 3.0. As always, I encourage users of the SWIID to email me with their comments, questions, and suggestions.

SWIID Version 3.0 is now available!

Sunday, 11 July 2010

Version 3.0 of the SWIID is now available, with expanded coverage and improved estimates.

The data on which the SWIID is based have been expanded. I have collected another 2100 Gini observations (in addition to the 1500 added in v2.0), again with special attention to addressing the thinner spots in the WIID. As before, these data are available in the replication materials for those who are interested. Major sources for these data include the World Bank’s Povcalnet, the Socio-Economic Database for Latin America, Branko Milanovic’s World Income Distribution data (“All the Ginis”), and the ILO’s Household Income and Expenditure Statistics, but a multitude of national statistical offices and other sources were also consulted.

The SWIID also now incorporates the University of Texas Inequality Project’s UTIP-UNIDO dataset on differences in pay across industrial sectors. Across countries and years, these data explain only about half of the variation in net income inequality (and much less of gross income inequality) and so yield predictions with prohibitively large standard errors when employed in this way, but where there was sufficient data available, I used the UTIP data to make within-country loess predictions of both net and gross income inequality that informed the SWIID estimates.

The imputation routine used for generating the SWIID was cleaned up: the code now runs more efficiently, and a few errors were corrected.

Many researchers have asked me about using the SWIID to examine questions of redistribution, so I now include in the dataset the percentage reduction in gross income inequality (that is, the difference between the gross and net income inequality, divided by gross income inequality, multiplied by 100) as an estimate of redistribution (“redist”) as well as its associated standard error (“redist_se”). The standard errors for redistribution are particularly important to take into account, as they can often be quite large relative to the size of the estimates. Observations for redistribution are omitted for countries for which the source data do not include multiple observations of either net or gross income inequality: in such cases, although the two inequality series each still constitute the most comparable available estimates, the difference between them reflects only information from other countries, and treating it as meaningful independent information about redistribution would be unwise. Similarly, because the underlying data is often thin in the early years included in the SWIID, redistribution is only reported after 1975 for most of the advanced countries and only after 1985 for most countries in the developing world.
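
In code, the redistribution estimate is just a simple transformation of the two inequality series (a sketch of the definition above, assuming the data are loaded as swiid):

swiid$redist <- with(swiid, 100 * (gini_gross - gini_net) / gini_gross)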

As always, I encourage users of the SWIID to email me with their comments, questions, and suggestions.

Using the SWIID Standard Errors

Sunday, 20 June 2010

Incorporating the standard errors in the SWIID estimates into one’s analyses is the right thing to do, but it is not a trivial exercise. I myself have left it out of some work where I felt the model was already maxed out on complexity (though in such cases, I advise at least excluding observations with particularly large errors). The short story is that one generates a bunch of Monte Carlo simulations of the SWIID data from the estimates and standard errors, then analyzes each simulation, then combines the results of the multiple analyses as one would in a multiple-imputation setup (this should be easier to do with Stata 11’s new multiple-imputation tools, but I won’t get my copy of Stata 11 until the fall—oh well). The code below does the trick.

**Using the SWIID Standard Errors: An Example**
//Load SWIID and generate fake data for example
use "SWIIDv2_0.dta", clear
set seed 4533160
gen x1 = 20*rnormal()
gen x2 = rnormal()
gen x3 = 3*rnormal()
gen y = .03*x1 + 3*x2 + .5*x3 + .05*gini_net + 5 + 20*rnormal()
reg y x1 x2 x3 gini_net

//Generate ten Monte Carlo simulations of the gini_net series
egen ccode=group(country)               
tsset ccode year                        
set seed 3166                           
forvalues a = 1/10 {
    gen e0 = rnormal()
    quietly tssmooth ma e00 = e0, weights(1 1 <2> 1 1)
    quietly sum e00
    quietly gen g`a'=gini_net+e00*(1/r(sd))*gini_net_se
    drop e0 e00
}

//Perform analysis using each of the ten simulations, saving the results
local other_ivs = "x1 x2 x3"        /*to be replaced with your other IVs, that is, not including gini_net or the constant*/
local n_ivs = 5             /*to be replaced with the number of IVs, now *including* gini_net and the constant*/
matrix coef = J(`n_ivs', 10, -99)
matrix se = J(`n_ivs', 10, -99)
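// note: the "se" matrix holds variances; Rubin's rules combine variances, with the square root taken only at the end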
matrix r_sq = J(1, 10, -99)
forvalues a = 1/10 {
    quietly reg y `other_ivs' g`a'  /*to be replaced with your analysis*/   
    matrix coef[1,`a'] = e(b)'
    matrix A = e(V)
    forvalues b = 1/`n_ivs' {
            matrix se[`b', `a'] = (A[`b',`b'])
    }
    matrix r_sq[1, `a'] = e(r2)
}       

local cases = e(N)

svmat coef, names(coef)
svmat se, names(se)
svmat r_sq, names(r_sq)


//Display results across all simulations
egen coef_all = rowmean(coef1-coef10)

gen ss_all = 0
forvalues a = 1/10 {
    quietly replace ss_all = ss_all + (coef`a'-coef_all)^2
}
egen se_all = rowmean(se1-se10)
replace se_all = se_all + (((1+(1/10)) * ((1/9) * ss_all))) /*Total variance, per Rubin (1987)*/
replace se_all = (se_all)^.5 /*Total standard error*/

gen t_all = coef_all/se_all
gen p_all = 2*normal(-abs(t_all))

egen r_sq_all = rowmean(r_sq1-r_sq10)

gen vars = " " in 1/`n_ivs'
local i = 0
foreach iv in `other_ivs' "Inequality" "Constant" {
    local i = `i'+1
    replace vars = "`iv'" in `i'
}
mkmat coef_all se_all p_all if coef_all~=., matrix(res_all) rownames(vars)
matrix list res_all, format(%9.3f)
quietly sum r_sq_all
local r2 = round(`r(mean)', .001)
di "R-sq = `r2'"
di "N = `cases'"

Please feel free to drop me an email if you have any questions or comments.

SWIID Version 2.0

Friday, 31 July 2009

Version 2.0 of the SWIID is now available, and it is a major upgrade. It introduces two important changes from Version 1.1 (the version described in the SSQ article). First, I collected a large number (1500+) of Gini observations that are excluded from the WIID with an eye towards addressing some of the thinner spots in the SWIID’s underlying data. Second, I rewrote several parts of the missing-data algorithm. The key change is a switch from multilevel to (flat) linear regression modeling for the imputation of conversion ratios between the 21 categories of available Gini data. Given the patterns of missingness in the data, complete pooling (as occurs in a flat linear regression) proved superior to partial pooling (as occurs in multilevel modeling). The result, along with some minor improvements in coverage, is considerably smaller standard errors in the Gini index estimates, particularly in Latin America and Africa, than in Version 1.1. All SWIID users are encouraged to use these new data in their work.

SWIID Version 1.1

Sunday, 12 October 2008

So much for version control. With apologies to v1.0 users, Version 1.1 is the SWIID as reported in “Standardizing the World Income Inequality Database.”

SWIID Version 1.0

Saturday, 13 September 2008

“Standardizing the World Income Inequality Database” has been accepted for publication in the Social Science Quarterly. Version 0.9 of the SWIID is now released as Version 1.0 without modification.

SWIID Version 0.9

Tuesday, 5 August 2008

The SWIID is currently undergoing peer review for publication.