
Browsing by Author "Marquardt, Kyle L."

Now showing 1 - 12 of 12
  • Item
    Constraining Governments: New indices of vertical, horizontal and diagonal accountability
    (2017) Lührmann, Anna; Marquardt, Kyle L.; Mechkova, Valeriya; V-Dem Institute
    Accountability - constraints on the government’s use of political power - is one of the cornerstones of good governance. However, conceptual stretching and a lack of reliable measures have limited cross-national research and comparisons regarding the role of both accountability writ large and its different sub-types. To address this research gap, we use the V-Dem dataset and Bayesian statistical models to develop new ways to conceptualize and measure accountability and its core dimensions. We provide indices capturing the extent to which governments are accountable to citizens (vertical accountability), other state institutions (horizontal accountability) and the media and civil society (diagonal accountability), as well as an aggregate index that incorporates the three sub-types. These indices cover virtually all countries from 1900 to today. We demonstrate the validity of our new measures by analyzing trends from key countries, as well as by demonstrating that the measures are positively related to development outcomes such as health and education.
  • Item
    Estimating Latent Traits from Expert Surveys: An Analysis of Sensitivity to Data Generating Process
    (2018) Marquardt, Kyle L.; Pemstein, Daniel; V-Dem Institute
    Models for converting expert-coded data to point estimates of latent concepts assume different data-generating processes. In this paper, we simulate ecologically-valid data according to different assumptions, and examine the degree to which common methods for aggregating expert-coded data can recover true values and construct appropriate coverage intervals from these data. We find that hierarchical latent variable models and the bootstrapped mean perform similarly when variation in reliability and scale perception is low; latent variable techniques outperform the mean when variation is high. Hierarchical A-M and IRT models generally perform similarly, though IRT models are often more likely to include true values within their coverage intervals. The median and non-hierarchical latent variable modeling techniques perform poorly under most assumed data generating processes.
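The comparison the abstract describes can be illustrated with a toy simulation. This is a minimal sketch under invented assumptions (the function names, the Gaussian noise model, and the known reliabilities are all illustrative, not the paper's actual data-generating processes or models): experts code a latent value with varying reliability, and the simple mean over experts is compared with a precision-weighted estimate that stands in for what a hierarchical latent-variable model does.

```python
import random

def simulate_expert_codes(true_values, reliabilities, seed=0):
    """Simulate noisy expert codings of latent values; each expert's noise
    scales inversely with their reliability (an illustrative assumption)."""
    rng = random.Random(seed)
    return [[v + rng.gauss(0, 1.0 / r) for r in reliabilities]
            for v in true_values]

def mean_aggregate(codes):
    """The standard practice: the simple average over experts."""
    return sum(codes) / len(codes)

def precision_weighted(codes, reliabilities):
    """A crude stand-in for latent-variable aggregation: down-weight
    less reliable experts by their (here, assumed known) precision."""
    weights = [r * r for r in reliabilities]
    return sum(c * w for c, w in zip(codes, weights)) / sum(weights)
```

When reliabilities are equal the two aggregations coincide, which mirrors the abstract's finding that the bootstrapped mean and latent-variable models perform similarly under low variation in reliability; the weighted estimate only pulls away from the mean once some experts are much noisier than others.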
  • Item
    Experts, Coders, and Crowds: An analysis of substitutability
    (2017) Marquardt, Kyle L.; Pemstein, Daniel; Sanhueza Petrarca, Constanza; Seim, Brigitte; Wilson, Steven Lloyd; Bernhard, Michael; Coppedge, Michael; Lindberg, Staffan I.; V-Dem Institute
    Recent work suggests that crowd workers can replace experts and trained coders in common coding tasks. However, while many political science applications require coders to both find relevant information and provide judgment, current studies focus on a limited domain in which experts provide text for crowd workers to code. To address potential over-generalization, we introduce a typology of data producing actors - experts, coders, and crowds - and hypothesize factors which affect crowd-expert substitutability. We use this typology to guide a comparison of data from crowdsourced and expert surveys. Our results provide sharp scope conditions for the substitutability of crowd workers: when coding tasks require contextual and conceptual knowledge, crowds produce substantively different data from coders and experts. We also find that crowd workers can cost more than experts in the context of cross-national panels, and that one purported advantage of crowdsourcing - replicability - is undercut by an insufficient number of crowd workers.
  • Item
    How and How Much Does Expert Error Matter? Implications for Quantitative Peace Research
    (2019) Marquardt, Kyle L.; V-Dem Institute
    Expert-coded datasets provide scholars with otherwise unavailable cross-national longitudinal data on important concepts. However, expert coders vary in their reliability and scale perception, potentially resulting in substantial measurement error; this variation may correlate with outcomes of interest, biasing results in analyses that use these data. This latter concern is particularly acute for key concepts in peace research. In this article, I describe potential sources of expert error, focusing on the measurement of identity-based discrimination. I then use expert-coded data on identity-based discrimination to examine 1) the implications of measurement error for quantitative analyses that use expert-coded data, and 2) the degree to which different techniques for aggregating these data ameliorate these issues. To do so, I simulate data with different forms and levels of expert error and regress conflict onset on different aggregations of these data. These analyses yield two important results. First, almost all aggregations show a positive relationship between identity-based discrimination and conflict onset consistently across simulations, in line with the assumed true relationship between the concept and outcome. Second, different aggregation techniques vary in their substantive robustness beyond directionality. A structural equation model provides the most consistently robust estimates, while both the point estimates from an Item Response Theory (IRT) model and the average over expert codings provide similar and relatively robust estimates in most simulations. The median over expert codings and a naive multiple imputation technique yield the least robust estimates.
  • Item
    Introducing the Historical Varieties of Democracy Dataset: Political Institutions in the Long 19th Century
    (2018) Knutsen, Carl Henrik; Teorell, Jan; Cornell, Agnes; Gerring, John; Gjerløw, Haakon; Skaaning, Svend-Erik; Wig, Tore; Ziblatt, Daniel; Marquardt, Kyle L.; Pemstein, Dan; Seim, Brigitte; V-Dem Institute
    The Historical Varieties of Democracy Dataset (Historical V-Dem) is a new dataset containing about 260 indicators, both factual and evaluative, describing various aspects of political regimes and state institutions. The dataset covers 91 polities globally – including most large, sovereign states, as well as some semi-sovereign entities and large colonies – from 1789 to 1920 for many cases. The majority of the indicators are also included in the Varieties of Democracy dataset, which covers the period from 1900 to the present – and together these two datasets cover the bulk of “modern history”. Historical V-Dem also includes several new indicators, covering features that are pertinent for 19th century polities. We describe the data, the process of coding, and the different strategies employed in Historical V-Dem to cope with issues of reliability and validity and ensure inter-temporal- and cross-country comparability. To illustrate the potential uses of the dataset we provide a descriptive account of patterns of democratization in the “long 19th century.” Finally, we perform an empirical investigation of how inter-state war relates to subsequent democratization.
  • Item
    IRT models for expert-coded panel data
    (2017) Marquardt, Kyle L.; Pemstein, Daniel; V-Dem Institute
    Data sets quantifying phenomena of social-scientific interest often use multiple experts to code latent concepts. While it remains standard practice to report the average score across experts, experts likely vary in both their expertise and their interpretation of question scales. As a result, the mean may be an inaccurate statistic. Item-response theory (IRT) models provide an intuitive method for taking these forms of expert disagreement into account when aggregating ordinal ratings produced by experts, but they have rarely been applied to cross-national expert-coded panel data. In this article, we investigate the utility of IRT models for aggregating expert-coded data by comparing the performance of various IRT models to the standard practice of reporting average expert codes, using both real and simulated data. Specifically, we use expert-coded cross-national panel data from the V–Dem data set to both conduct real-data comparisons and inform ecologically-motivated simulation studies. We find that IRT approaches outperform simple averages when experts vary in reliability and exhibit differential item functioning (DIF). IRT models are also generally robust even in the absence of simulated DIF or varying expert reliability. Our findings suggest that producers of cross-national data sets should adopt IRT techniques to aggregate expert-coded data of latent concepts.
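Differential item functioning, the key problem this abstract targets, can be shown with a minimal sketch (the threshold values and expert labels below are illustrative assumptions, not taken from the paper): two experts observe the same latent value but map it to different ordinal categories because they apply the scale's cut points differently, which is exactly the disagreement a simple average cannot distinguish from genuine error.

```python
def ordinal_code(latent, thresholds):
    """Return the ordinal category: the number of expert-specific
    thresholds the latent value exceeds."""
    return sum(latent > t for t in sorted(thresholds))

# Two hypothetical experts rate the same latent value but place the
# scale's cut points differently (differential item functioning):
strict = [-0.5, 0.5, 1.5]    # a "strict" expert's thresholds
lenient = [-1.5, -0.5, 0.5]  # the same scale, shifted downward

latent = 0.0
codes = [ordinal_code(latent, strict), ordinal_code(latent, lenient)]
# strict expert exceeds only -0.5 -> category 1
# lenient expert exceeds -1.5 and -0.5 -> category 2
```

An IRT model in this setting would estimate each expert's thresholds jointly with the latent values, so the systematic one-category gap between these two raters is absorbed into their threshold parameters rather than biasing the aggregate.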
  • Item
    Measuring Politically-relevant Identity, With and Without Groups
    (2021-03) Marquardt, Kyle L.; V-Dem Institute
    Quantitative scholarship on civil conflict still largely relies upon the ethnic group as the foundation for measures of politically-relevant diversity and, in particular, identity-based political inclusion. However, ethnicity remains notoriously difficult to measure: even cutting-edge analyses are subject to the issues of intra- and inter-ethnic variation in identity salience that plagued earlier work. Here I propose a new way to measure identity-based exclusion. Specifically, I use latent variable models to combine data from both the Ethnic Power Relations Project, which uses the demographic size of politically-relevant ethnic groups to operationalize inclusion; and the Varieties of Democracy Project, which measures overall identity-based inclusion without directly accounting for demographic group size. The latent variable models combine insights from both measurement approaches, ameliorating concerns about using either strategy in isolation. In addition to providing cross-nationally cohesive data on identity-based exclusion for future work, these models provide a framework for scholars to build their own theoretically-driven models of politically-relevant diversity and inclusion.
  • Item
    The V–Dem Measurement Model: Latent Variable Analysis for Cross-National and Cross-Temporal Expert-Coded Data
    (2022-03) Pemstein, Daniel; Marquardt, Kyle L.; Tzelgov, Eitan; Wang, Yi-ting; Medzihorsky, Juraj; Krusell, Joshua; Miri, Farhad; Römer, Johannes von; V-Dem Institute
    The Varieties of Democracy (V–Dem) project relies on country experts who code a host of ordinal variables, providing subjective ratings of latent—that is, not directly observable—regime characteristics over time. Sets of around five experts rate each case (country-year observation), and each of these raters works independently. Since raters may diverge in their coding because of either differences of opinion or mistakes, we require systematic tools with which to model these patterns of disagreement. These tools allow us to aggregate ratings into point estimates of latent concepts and quantify our uncertainty around these point estimates. In this paper we describe item response theory models that can account and adjust for differential item functioning (i.e. differences in how experts apply ordinal scales to cases) and variation in rater reliability (i.e. random error). We also discuss key challenges specific to applying item response theory to expert-coded cross-national panel data, explain the approaches that we use to address these challenges, highlight potential problems with our current framework, and describe long-term plans for improving our models and estimates. Finally, we provide an overview of the different forms in which we present model output.
  • Item
    The V-Dem Measurement Model: Latent Variable Analysis for Cross-National and Cross-Temporal Expert-Coded Data
    (2015) Pemstein, Daniel; Marquardt, Kyle L.; Tzelgov, Eitan; Wang, Yi-ting; Miri, Farhad; V-Dem Institute
    The Varieties of Democracy (V–Dem) project relies on country experts who code a host of ordinal variables, providing subjective ratings of latent—that is, not directly observable—regime characteristics over time. Sets of around five experts rate each case (country-year observation), and each of these raters works independently. Since raters may diverge in their coding because of either differences of opinion or mistakes, we require systematic tools with which to model these patterns of disagreement. These tools allow us to aggregate ratings into point estimates of latent concepts and quantify our uncertainty around these point estimates. In this paper we describe item response theory models that can account and adjust for differential item functioning (i.e. differences in how experts apply ordinal scales to cases) and variation in rater reliability (i.e. random error). We also discuss key challenges specific to applying item response theory to expert-coded cross-national panel data, explain the approaches that we use to address these challenges, highlight potential problems with our current framework, and describe long-term plans for improving our models and estimates. Finally, we provide an overview of the different forms in which we present model output.
  • Item
    The V-Dem Measurement Model: Latent Variable Analysis for Cross-National and Cross-Temporal Expert-Coded Data
    (2020-03) Pemstein, Daniel; Marquardt, Kyle L.; Tzelgov, Eitan; Wang, Yi-ting; Medzihorsky, Juraj; Krusell, Joshua; Miri, Farhad; von Römer, Johannes; V-Dem Institute
    The Varieties of Democracy (V-Dem) project relies on country experts who code a host of ordinal variables, providing subjective ratings of latent—that is, not directly observable—regime characteristics over time. Sets of around five experts rate each case (country-year observation), and each of these raters works independently. Since raters may diverge in their coding because of either differences of opinion or mistakes, we require systematic tools with which to model these patterns of disagreement. These tools allow us to aggregate ratings into point estimates of latent concepts and quantify our uncertainty around these point estimates. In this paper we describe item response theory models that can account and adjust for differential item functioning (i.e. differences in how experts apply ordinal scales to cases) and variation in rater reliability (i.e. random error). We also discuss key challenges specific to applying item response theory to expert-coded cross-national panel data, explain the approaches that we use to address these challenges, highlight potential problems with our current framework, and describe long-term plans for improving our models and estimates. Finally, we provide an overview of the different forms in which we present model output.
  • Item
    The V-Dem Method for Aggregating Expert-Coded Data
    (2018) Maxwell, Laura; Marquardt, Kyle L.; Lührmann, Anna; V-Dem Institute
  • Item
    What Makes Experts Reliable?
    (2018) Marquardt, Kyle L.; Pemstein, Daniel; Seim, Brigitte; Wang, Yi-ting; V-Dem Institute
    Many datasets use experts to code latent quantities of interest. However, scholars have not explored either the factors affecting expert reliability or the degree to which these factors influence estimates of latent concepts. Here we systematically analyze potential correlates of expert reliability using six randomly selected variables from a cross-national panel dataset, V-Dem v8. The V-Dem project includes a diverse group of over 3,000 experts and uses an IRT model to incorporate variation in both expert reliability and scale perception into its data aggregation process. In the process, the IRT model produces an estimate of expert reliability, which affects the relative contribution of an expert to the model. We examine a variety of factors that could correlate with reliability, and find little evidence of theoretically-untenable bias due to expert characteristics. On the other hand, there is evidence that attentive and confident experts who have a basic contextual knowledge of the concept of democracy are more reliable.

DSpace software copyright © 2002-2025 LYRASIS
