
How and How Much Does Expert Error Matter? Implications for Quantitative Peace Research

Abstract
Expert-coded datasets provide scholars with otherwise unavailable cross-national longitudinal data on important concepts. However, expert coders vary in their reliability and scale perception, potentially resulting in substantial measurement error; this variation may correlate with outcomes of interest, biasing results in analyses that use these data. This latter concern is particularly acute for key concepts in peace research. In this article, I describe potential sources of expert error, focusing on the measurement of identity-based discrimination. I then use expert-coded data on identity-based discrimination to examine 1) the implications of measurement error for quantitative analyses that use expert-coded data, and 2) the degree to which different techniques for aggregating these data ameliorate these issues. To do so, I simulate data with different forms and levels of expert error and regress conflict onset on different aggregations of these data. These analyses yield two important results. First, almost all aggregations show a positive relationship between identity-based discrimination and conflict onset consistently across simulations, in line with the assumed true relationship between the concept and outcome. Second, different aggregation techniques vary in their substantive robustness beyond directionality. A structural equation model provides the most consistently robust estimates, while both the point estimates from an Item Response Theory (IRT) model and the average over expert codings provide similar and relatively robust estimates in most simulations. The median over expert codings and a naive multiple imputation technique yield the least robust estimates.
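The simulation design the abstract describes — noisy expert codings of a latent concept, collapsed by an aggregation rule and then related to conflict onset — can be sketched minimally. This is a hypothetical illustration only: every parameter, function name, and distributional choice below is an assumption for exposition, not the paper's actual simulation setup.

```python
import math
import random
import statistics

random.seed(42)

def simulate_expert_codings(n_countries=200, n_experts=5, error_sd=0.5):
    """Draw a latent discrimination score per country, then noisy expert codings.
    All parameters are illustrative; the paper varies the form and level of error."""
    latent = [random.gauss(0, 1) for _ in range(n_countries)]
    codings = [[z + random.gauss(0, error_sd) for _ in range(n_experts)]
               for z in latent]
    return latent, codings

def aggregate(codings, how="mean"):
    """Collapse each country's expert codings to a point estimate (mean or median)."""
    fn = statistics.mean if how == "mean" else statistics.median
    return [fn(country_codings) for country_codings in codings]

def simulate_onset(latent, beta=1.0, intercept=-2.0):
    """Conflict onset as a Bernoulli draw whose log-odds rise with discrimination,
    matching the assumed positive true relationship described in the abstract."""
    return [1 if random.random() < 1 / (1 + math.exp(-(intercept + beta * z))) else 0
            for z in latent]

latent, codings = simulate_expert_codings()
onset = simulate_onset(latent)
mean_agg = aggregate(codings, "mean")
median_agg = aggregate(codings, "median")
```

The article's actual comparison also covers IRT point estimates, a structural equation model, and a multiple imputation technique, and evaluates each by regressing conflict onset on the aggregated scores; the sketch above shows only the simplest mean/median aggregation step.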
URI
http://hdl.handle.net/2077/59412
Collections
  • Working Papers/Books / Department of Political Science / Statsvetenskapliga institutionen
View/Open
gupea_2077_59412_1.pdf (2.028 MB)
Date
2019
Author
Marquardt, Kyle L.
Series/Report no.
Working Papers
2019:84
Language
eng
Metadata
Show full item record

DSpace software copyright © 2002-2016  DuraSpace
Contact Us | Send Feedback
Theme by Atmire NV
 

 
