
Browsing by Author "Miri, Farhad"

Now showing 1 - 4 of 4
  • Competition in the American Mutual Fund Industry. An empirical study
    (2014-12-15) Miri, Farhad; University of Gothenburg/Graduate School; Göteborgs universitet/Graduate School
    This thesis investigates the relationship between the inflow of new investment into open-end U.S. equity mutual funds and their historical performance, focusing on the top-performing funds. Using a piecewise linear regression and the Fama and MacBeth (1973) two-stage estimation method on fund data from January 2004 through December 2014, it finds that the convexity of the flow-performance relation among top performers is more extreme than the convexity usually observed across the industry as a whole. The difference holds irrespective of the performance measure and is both statistically and economically significant. The results suggest that competition among mutual funds is not just about being better than average but rather about winning the "competition": fund managers can attract markedly more inflow relative to their peers by securing a position in the top 10% of the industry by performance. A positive and significant relation between Morningstar rating and fund flow is also documented.
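The estimation strategy described in this abstract (a piecewise linear flow-performance regression estimated with the Fama-MacBeth two-stage procedure) can be sketched as follows. This is a minimal illustration on simulated data, not the thesis's actual specification: the segment breakpoints (0.2 and 0.8), the simulated coefficients, and all variable names are assumptions for the example.

```python
import numpy as np

def piecewise_rank(rank, lo=0.2, hi=0.8):
    """Split a fractional performance rank in [0, 1] into three
    linear segments (bottom, middle, top) for a piecewise regression."""
    bottom = np.minimum(rank, lo)
    middle = np.clip(rank - lo, 0.0, hi - lo)
    top = np.maximum(rank - hi, 0.0)
    return np.column_stack([bottom, middle, top])

def fama_macbeth(monthly_ranks, monthly_flows):
    """Stage 1: one cross-sectional OLS per month.
    Stage 2: average the monthly coefficients; standard errors come
    from the time series of those coefficient estimates."""
    betas = []
    for rank, flow in zip(monthly_ranks, monthly_flows):
        X = np.column_stack([np.ones(len(rank)), piecewise_rank(rank)])
        beta, *_ = np.linalg.lstsq(X, flow, rcond=None)
        betas.append(beta)
    betas = np.array(betas)
    mean = betas.mean(axis=0)
    se = betas.std(axis=0, ddof=1) / np.sqrt(len(betas))
    return mean, se

# Simulated panel with a convex flow-performance relation:
# flows respond far more strongly in the top segment of ranks.
rng = np.random.default_rng(0)
months, funds = 120, 200
ranks = [rng.random(funds) for _ in range(months)]
flows = [0.02 + 0.2 * r + 1.5 * np.maximum(r - 0.8, 0.0)
         + rng.normal(0.0, 0.05, funds) for r in ranks]
mean, se = fama_macbeth(ranks, flows)
```

On data simulated this way, the estimated top-segment slope comes out well above the middle-segment slope, which is the kind of extra convexity among top performers the abstract reports.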
  • The V–Dem Measurement Model: Latent Variable Analysis for Cross-National and Cross-Temporal Expert-Coded Data
    (2022-03) Pemstein, Daniel; Marquardt, Kyle L.; Tzelgov, Eitan; Wang, Yi-ting; Medzihorsky, Juraj; Krusell, Joshua; Miri, Farhad; Römer, Johannes von; V-Dem Institute
    The Varieties of Democracy (V–Dem) project relies on country experts who code a host of ordinal variables, providing subjective ratings of latent (that is, not directly observable) regime characteristics over time. Sets of around five experts rate each case (country-year observation), and each of these raters works independently. Since raters may diverge in their coding because of either differences of opinion or mistakes, we require systematic tools with which to model these patterns of disagreement. These tools allow us to aggregate ratings into point estimates of latent concepts and quantify our uncertainty around these point estimates. In this paper we describe item response theory models that account and adjust for differential item functioning (i.e., differences in how experts apply ordinal scales to cases) and variation in rater reliability (i.e., random error). We also discuss key challenges specific to applying item response theory to expert-coded cross-national panel data, explain the approaches that we use to address these challenges, highlight potential problems with our current framework, and describe long-term plans for improving our models and estimates. Finally, we provide an overview of the different forms in which we present model output.
  • The V-Dem Measurement Model: Latent Variable Analysis for Cross-National and Cross-Temporal Expert-Coded Data
    (2015) Pemstein, Daniel; Marquardt, Kyle L.; Tzelgov, Eitan; Wang, Yi-ting; Miri, Farhad; V-Dem Institute
    The Varieties of Democracy (V–Dem) project relies on country experts who code a host of ordinal variables, providing subjective ratings of latent (that is, not directly observable) regime characteristics over time. Sets of around five experts rate each case (country-year observation), and each of these raters works independently. Since raters may diverge in their coding because of either differences of opinion or mistakes, we require systematic tools with which to model these patterns of disagreement. These tools allow us to aggregate ratings into point estimates of latent concepts and quantify our uncertainty around these point estimates. In this paper we describe item response theory models that account and adjust for differential item functioning (i.e., differences in how experts apply ordinal scales to cases) and variation in rater reliability (i.e., random error). We also discuss key challenges specific to applying item response theory to expert-coded cross-national panel data, explain the approaches that we use to address these challenges, highlight potential problems with our current framework, and describe long-term plans for improving our models and estimates. Finally, we provide an overview of the different forms in which we present model output.
  • The V-Dem Measurement Model: Latent Variable Analysis for Cross-National and Cross-Temporal Expert-Coded Data
    (2020-03) Pemstein, Daniel; Marquardt, Kyle L.; Tzelgov, Eitan; Wang, Yi-ting; Medzihorsky, Juraj; Krusell, Joshua; Miri, Farhad; von Römer, Johannes; V-Dem Institute
    The Varieties of Democracy (V-Dem) project relies on country experts who code a host of ordinal variables, providing subjective ratings of latent (that is, not directly observable) regime characteristics over time. Sets of around five experts rate each case (country-year observation), and each of these raters works independently. Since raters may diverge in their coding because of either differences of opinion or mistakes, we require systematic tools with which to model these patterns of disagreement. These tools allow us to aggregate ratings into point estimates of latent concepts and quantify our uncertainty around these point estimates. In this paper we describe item response theory models that account and adjust for differential item functioning (i.e., differences in how experts apply ordinal scales to cases) and variation in rater reliability (i.e., random error). We also discuss key challenges specific to applying item response theory to expert-coded cross-national panel data, explain the approaches that we use to address these challenges, highlight potential problems with our current framework, and describe long-term plans for improving our models and estimates. Finally, we provide an overview of the different forms in which we present model output.
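The measurement problem described in the V-Dem abstracts (independent expert raters who vary in reliability, aggregated into point estimates of a latent concept) can be illustrated with a toy aggregation. This is not the V-Dem item response theory model itself (which is a Bayesian ordinal IRT model with differential item functioning); it is a much simpler reliability-weighted consensus sketch, and the simulation parameters, function name, and rater counts below are all assumptions for the example.

```python
import numpy as np

def aggregate_ratings(R, n_iter=25):
    """R: (raters x cases) array of ordinal ratings, np.nan where a
    rater did not code a case. Alternates between (a) a reliability-
    weighted consensus score per case and (b) per-rater weights set
    to the inverse mean squared deviation from that consensus."""
    mask = ~np.isnan(R)
    filled = np.where(mask, R, 0.0)
    w = np.ones(R.shape[0])          # start with equal reliability
    for _ in range(n_iter):
        denom = (w[:, None] * mask).sum(axis=0)
        scores = (filled * w[:, None]).sum(axis=0) / denom
        dev = np.where(mask, R - scores, np.nan)
        mse = np.nanmean(dev ** 2, axis=1)
        w = 1.0 / (mse + 1e-6)       # noisier raters get smaller weights
        w /= w.mean()
    return scores, w

# Simulated panel: 5 raters, 60 country-years; rater 4 is much noisier,
# mimicking the "variation in rater reliability" the papers model.
rng = np.random.default_rng(1)
truth = rng.uniform(0.0, 4.0, 60)
R = truth + rng.normal(0.0, 0.3, (5, 60))
R[4] = truth + rng.normal(0.0, 2.0, 60)
scores, weights = aggregate_ratings(R)
```

In this simulation the noisy rater receives the smallest weight, and the consensus scores track the simulated latent values closely; the actual V-Dem model additionally delivers uncertainty estimates and corrects for how differently each expert applies the ordinal scale.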

DSpace software copyright © 2002-2025 LYRASIS
