Browsing by Author "Jonsson, Robert"
Now showing 1 - 14 of 14
Item A CUSUM PROCEDURE FOR DETECTION OF OUTBREAKS IN POISSON DISTRIBUTED MEDICAL HEALTH EVENTS (University of Gothenburg, 2010-11-02) Jonsson, Robert

CUSUM procedures based on standardized statistics are often assumed to have expectation zero and to be normally distributed. If these conditions are not satisfied, the consequences for the determination of proper alarm bounds and for the frequency of false alarms can be serious. Here a CUSUM method is presented for detecting outbreaks in health events that are Poisson distributed. It is based on a standardized statistic whose bias from zero is negligible. The alarm boundaries are determined from the actual distribution of the statistic rather than from normality assumptions, and from requirements on the probability of false alarms instead of the common practice of focusing on average run lengths (ARLs). The new method is compared with other CUSUM methods in Monte Carlo simulations. It is found to have about the same expected time to first motivated alarm and the same sensitivity as the other methods, but its expected times to first false alarm are 9 % – 90 % longer. The new method is applied to outbreaks of sick-listing and to outbreaks of Chlamydial infection.

Item A Markov Chain Model for Analysing the Progression of Patients' Health States (2011) Jonsson, Robert

Markov chains (MCs) have been used to study how the health states of patients progress in time. With few exceptions, these studies have rested on the questionable assumptions that the MC has order m = 1 and is homogeneous in time. In this paper a three-state non-homogeneous MC model is introduced that allows m to vary. It is demonstrated how wrong assumptions about homogeneity and about the value of m can invalidate predictions of future health states. This can in turn seriously bias a cost-benefit analysis when costs are attached to the predicted outcomes.
The present paper considers only problems connected with model construction and estimation; problems of testing for a proper value of m and of homogeneity are treated in a subsequent paper. Data on work resumption among sick-listed women and men are used to illustrate the theory. A non-homogeneous MC with m = 2 was well fitted to the data for both sexes. The essential difference between the rehabilitation processes of the two sexes was that men had a higher chance of moving from the intermediate health state to the state 'healthy', while women tended to remain in the intermediate state for a longer time.

Item Choosing the winning sponsorship - for both company and society. How sponsors choose elite clubs to sponsor and how they measure the effects (2024-07-04) Jonsson, Robert; Magnusson, Linus; University of Gothenburg/Department of Business Administration

Sponsorship has become one of the most common marketing methods and continues to grow in popularity. Sponsorship means that a company buys the right to associate itself with a property, for example a sports club. The reason for its popularity may lie in its brand-building qualities. Previous research shows that sponsorship can improve a company's image, increase sales and create pride among employees, which in turn motivates them. Sponsoring professional sports teams can also improve the sponsor's CSR image. Failed sponsorship, however, can instead bring negative effects such as boycotts and/or a damaged reputation. Choosing the right club to sponsor is therefore important for obtaining positive effects as well as for avoiding negative ones. Having several alternatives, as is the case in Gothenburg, can make the choice even harder. Previous research also shows that there are challenges in measuring the effects of sponsorship and that the figures can vary depending on which measurement method is used.
The primary aim of this study is to examine, and to increase the understanding of, how main sponsors of elite clubs in Gothenburg playing in the top league choose suitable clubs to sponsor. The secondary aim is to examine how they measure the effects of the sponsorship. The research questions are: (1) How do main sponsors in Gothenburg choose suitable elite clubs to sponsor in order to achieve the desired effects? (2) How do the main sponsors in this study measure the effects of their sponsorship? The study is delimited to team sports, to sponsors of elite clubs in the top league in Gothenburg, and to main sponsors only. To answer the research questions, a qualitative approach was used, with five semi-structured interviews: four with sponsors and one complementary interview with an expert. A thematic analysis was then carried out. The study finds that the choice of club to sponsor rests on the company's policy and strategy, the club's network, the decision-maker's feelings and experience, and the club's CSR work. The last of these is the most important criterion for all sponsors in the study. The criteria for choosing a club agreed with previous research, but the primacy of CSR work is unique to this study. To measure effects, the sponsors receive reports from measurement firms and from the clubs themselves. They are interested in how visible they have been, in how many people know that they are sponsors, and in increasing sales. However, the sponsors in this study lack specific metrics. They make their own measurements of, for example, CSR and sustainability, but no specific metrics are mentioned. Previous research on measuring sponsorship effects partly agrees with the sponsors in this study. All sponsors in this study, however, find it difficult to measure positive financial effects of sponsorship.
Nor do they mention how they measure their brands or the effect of CSR.

Item Bayes prediction of binary outcomes based on correlated discrete predictors. (University of Gothenburg, 2002-03-01) Jonsson, Robert; Persson, Anders

An approach based on Bayes' theorem is proposed for predicting the binary outcomes X = 0, 1, given that a vector of predictors Z has taken the value z. It is assumed that Z can be decomposed into g independent vectors given X = 1 and h independent vectors given X = 0. First, point and interval estimators are derived for the target probability P(X = 1|z). In a second step these estimators are used to predict the outcomes for new subjects chosen from the same population. Sample sizes needed to achieve reliable estimates of the target probability in the first step are suggested, as well as sample sizes needed to obtain stable estimates of the predictive values in the second step. It is also shown that the effects of ignoring correlations between the predictors can be serious. The results are illustrated on Swedish data of work resumption among long-term sick-listed individuals.

Item EXACT PROPERTIES OF McNEMAR'S TEST IN SMALL SAMPLES (University of Gothenburg, 1993-02-01) Jonsson, Robert

The exact distribution of McNemar's test statistic is used to determine critical points for two-sided tests of equality of marginal proportions in the correlated 2x2 table. The result is a conservative unconditional test which reduces to the conditional binomial test as a special case. Exact critical points are given for the significance levels 0.05, 0.01 and 0.001 with sample sizes n = 6(1)50. A computer program for tail probabilities makes the calculation of power easy. It is concluded that McNemar's test is never inferior to the conditional binomial test and that much can be gained by using the McNemar test when the main purpose is to detect differences between the marginal proportions in small samples.
A further conclusion is that the chi-square approximation of McNemar's test statistic may be inadequate when n <= 50; in particular, the 5 % critical points are consistently too small.

Item Maximum Likelihood Ratio based small-sample tests for random coefficients in linear regression (2003) Jonsson, Robert; Petzold, Max; Department of Economics

Two small-sample tests for random coefficients in linear regression are derived from the Maximum Likelihood Ratio. The first test has previously been proposed for testing equality of fixed effects, but is here shown to be suitable also for random coefficients. The second test is based on the multiple coefficient of determination from regressing the observed subject means on the estimated slopes. The properties and relations of the tests are examined in detail, followed by a simulation study of the power functions. The two tests are found to complement each other depending on the study design: the first test is preferred for a large number of observations from a small number of subjects, and the second test for the opposite situation. Finally, the robustness of the tests to violations of the distributional assumptions is examined.

Item Maximum Likelihood Ratio based small-sample tests for random coefficients in linear regression (2003-08-01) Petzold, Max; Jonsson, Robert

Two small-sample tests for random coefficients in linear regression are derived from the Maximum Likelihood Ratio. The first test has previously been proposed for testing equality of fixed effects, but is here shown to be suitable also for random coefficients. The second test is based on the multiple coefficient of determination from regressing the observed subject means on the estimated slopes. The properties and relations of the tests are examined in detail, followed by a simulation study of the power functions.
The two tests are found to complement each other depending on the study design: the first test is preferred for a large number of observations from a small number of subjects, and the second test for the opposite situation. Finally, the robustness of the tests to violations of the distributional assumptions is examined.

Item On the problem of optimal inference for time heterogeneous data with error components regression structure (2003) Jonsson, Robert; Department of Economics

Time heterogeneity, i.e. the fact that subjects are measured at different times, occurs frequently in non-experimental situations. For time heterogeneous data with error components regression structure it is demonstrated that, under customary normality assumptions, no estimation method based on Maximum Likelihood, Least Squares, within-subject or between-subject comparisons is generally superior when estimating the slope of the regression line. In some situations, however, it is possible to give guidelines for the choice of an optimal procedure. These are expressed in terms of the variability of the measurement times and of the inter-subject correlation. The results are demonstrated on data from a longitudinal medical study.

Item ON THE PROBLEM OF OPTIMAL INFERENCE IN THE SIMPLE ERROR COMPONENT MODEL FOR PANEL DATA (University of Gothenburg, 1991-02-01) Jonsson, Robert

For data consisting of cross sections of units observed over time, the Error Component Regression (ECR) model, with random intercept and constant slope, may sometimes be adequate. While most interest has focused on point estimation of the slope parameter β, little attention has been paid to the problem of making confidence statements and tests about β. In this paper, the performance of some estimators of β and the corresponding test statistics is investigated.
In consideration of bias, efficiency and power of tests, it is shown that the Maximum Likelihood estimator with the corresponding test statistic is outstanding in large samples. In the small-sample case, however, there is hardly any reason to use the Maximum Likelihood approach; there, estimators and test statistics based on within- or between-group comparisons are suggested. The results, together with tools for a proper application of the ECR model, are demonstrated on data from a medical follow-up study.

Item Relative Efficiency of a Quantile Method for Estimating Parameters in Censored Two-Parameter Weibull Distributions (University of Gothenburg, 2010-11-01) Jonsson, Robert

In simulation studies, computer time can be much reduced by using censoring. Here a simple method based on quantiles (the Q method) is compared with the Maximum Likelihood (ML) method for estimating the parameters of censored two-parameter Weibull distributions. The ML estimates were obtained using the SAS procedure NLMIXED. It is demonstrated that the estimators obtained by the Q method are less efficient than the ML estimators, but this can be compensated for by increasing the sample size, which still requires much less computer time than the ML method. The ML estimates can only be obtained by an iterative process, which opens the possibility of failures in the sense that reasonable estimates are presented as unreliable, or anomalous estimates are presented as reliable. Such anomalies were never obtained with the Q method.

Item Screening-related prevalence and incidence for non-recurrent diseases (University of Gothenburg, 1997-02-01) Jonsson, Robert

Expressions for prevalence (P) and incidence (I) in open dynamic populations are derived. When screenings are performed every s:th year, P and I will be functions of s. It is shown how the true values of P and I, which would have been obtained with continuous screening, can be estimated from screening data.
A solution is also given to the following problem: given that subsets of the population have different P's and I's, and that resources are limited so that only a fraction of the total population can be screened every year, who should be screened, and how often, in order to maximize the total proportion of detected cases?

Item Simple conservative confidence intervals for comparing matched proportions (University of Gothenburg, 2011-01-01) Jonsson, Robert

Unconditional confidence intervals (CIs) for the difference between marginal proportions in matched-pairs data have essentially been based on improvements of Wald's large-sample statistic. The latter are approximate and non-conservative. In some situations it may be important that CIs are conservative, e.g. when claiming bio-equivalence in small samples. Existing methods for constructing conservative CIs are computer intensive and not suitable for sample-size determination in planned studies. This paper presents a new simple method by which conservative CIs are readily computed. The method gives CIs that are comparable with earlier conservative methods in coverage probabilities and lengths. However, the new method can only be used if the proportions in the discordant cells, p and q, satisfy a certain condition, but this is fortunately the case in most applications, and several examples are given. The new method is compared with previously suggested approximate and exact methods in large-scale simulations.

Item Tests of Markov Order and Homogeneity in a Markov Chain (2011) Jonsson, Robert

A three-state non-homogeneous Markov chain (MC) of order m ≥ 0, denoted M(m), was previously introduced by the author. The model was used to analyze work resumption among sick-listed patients. It was demonstrated that wrong assumptions about the Markov order m and about homogeneity can seriously invalidate predictions of future health states. In this paper the focus is on tests (estimation) of m and of homogeneity.
When testing for Markov order it is suggested to test M(m) against M(m+1), with m chosen sequentially as 0, 1, 2, …, until the null hypothesis cannot be rejected. Two test statistics are used, one based on the Maximum Likelihood Ratio (MLR) and one based on a chi-square criterion. More formal test strategies based on Akaike's and Bayes' information criteria are also considered. Tests of homogeneity are based on MLR statistics. The performance of the tests is evaluated in simulation studies. The tests are applied to rehabilitation data, where it is concluded that the rehabilitation process develops according to a non-homogeneous Markov chain of order 2, possibly changing to a homogeneous chain of order 1 towards the end of the period.

Item When does Heckman's two-step procedure for censored data work and when does it not? (2008-02-22) Jonsson, Robert

Heckman's two-step procedure (Heckit) for estimating the parameters in linear models from censored data is frequently used by econometricians, despite the fact that earlier studies cast doubt on the procedure. In this paper it is shown that estimates of the hazard h of approaching the censoring limit, the latter being used as an explanatory variable in the second step of the Heckit, can induce multicollinearity. The influence of the censoring proportion and the sample size upon bias and variance in three types of random linear models is studied by simulations. From these results a simple relation is established that describes how absolute bias depends on the censoring proportion and the sample size. It is also shown that the Heckit may work with non-normal (Laplace) distributions, but that it collapses if h deviates too much from the hazard of the normal distribution. Data from a study of work resumption after sick-listing are used to demonstrate that the Heckit can be very risky.
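As an illustrative footnote to the CUSUM abstract in this list: the "other CUSUM methods" it benchmarks against include the textbook log-likelihood-ratio CUSUM for Poisson counts. The sketch below implements that classic scheme, assuming a known in-control mean `lam0`, out-of-control mean `lam1` and bound `h` (all hypothetical values here); it is not the standardized-statistic method the paper proposes.

```python
import math

def poisson_cusum(counts, lam0, lam1, h):
    """Classic log-likelihood-ratio CUSUM for Poisson counts.

    Accumulates the per-observation log-likelihood ratio of lam1 vs lam0,
    truncated at zero, and raises an alarm when the sum exceeds h.
    (Textbook scheme; the abstract's method instead sets its bounds from
    the exact distribution of a standardized statistic.)
    """
    w = math.log(lam1 / lam0)        # score contributed per observed event
    k = lam1 - lam0                  # reference value subtracted each period
    c, alarms = 0.0, []
    for t, x in enumerate(counts):
        c = max(0.0, c + x * w - k)
        if c > h:
            alarms.append(t)
            c = 0.0                  # restart the chart after an alarm
    return alarms

# Hypothetical example: a burst of high counts in a quiet series
series = [2, 3, 2, 1, 9, 10, 8, 2, 3]
print(poisson_cusum(series, lam0=2.0, lam1=6.0, h=4.0))  # → [4, 5, 6]
```

With these parameters the chart stays at zero through the quiet stretch and alarms only during the burst, which is the behaviour the bound h is tuned to balance against false alarms.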
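Two abstracts above (on McNemar's test and on matched proportions) revolve around the conditional binomial test for matched pairs. A minimal sketch of that conditional test, with hypothetical discordant counts in the example; the papers' own contributions (the unconditional conservative test and CIs) are not implemented here.

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (conditional binomial) McNemar test for matched pairs.

    b and c are the two discordant cell counts of the 2x2 table.  Under
    the null hypothesis of equal marginal proportions, b ~ Binomial(b+c, 1/2);
    the two-sided p-value doubles the smaller tail, capped at 1.
    """
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# 2 vs 10 discordant pairs: a clear asymmetry, small p-value
print(round(mcnemar_exact(2, 10), 4))  # → 0.0386
```

Because the p-value is computed from the exact binomial distribution rather than the chi-square approximation, it remains valid for the small samples (n <= 50) that the abstract flags as problematic for the approximation.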
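Several abstracts above concern choosing the order m of a Markov chain. As a loose illustration of the order-selection idea only, the sketch below picks m for a homogeneous chain by Akaike's information criterion; the papers themselves use sequential MLR tests on a non-homogeneous three-state chain, and all names and the example data here are hypothetical.

```python
from collections import Counter
from math import log

def loglik(seq, m):
    """Maximized log-likelihood of a homogeneous order-m Markov chain,
    with transition probabilities estimated by relative frequencies."""
    seq = tuple(seq)                     # tuples so slices are hashable keys
    ctx, trans = Counter(), Counter()
    for i in range(m, len(seq)):
        ctx[seq[i - m:i]] += 1           # count of each length-m context
        trans[seq[i - m:i + 1]] += 1     # context followed by the next state
    return sum(n * log(n / ctx[t[:-1]]) for t, n in trans.items())

def select_order(seq, n_states, max_m=3):
    """Choose m minimizing AIC = 2k - 2*loglik, where an order-m chain on
    n_states states has k = n_states**m * (n_states - 1) free parameters."""
    def aic(m):
        return 2 * n_states ** m * (n_states - 1) - 2 * loglik(seq, m)
    return min(range(max_m + 1), key=aic)

# A deterministic 3-state cycle is perfectly predicted by its last state,
# so the AIC should settle on order 1.
print(select_order([0, 1, 2] * 20, n_states=3))  # → 1
```

The same scaffolding extends to the sequential testing strategy of the abstract by replacing the AIC comparison of M(m) and M(m+1) with a likelihood-ratio statistic built from the two `loglik` values.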