Department of Mathematical Sciences / Institutionen för matematiska vetenskaper
Permanent URI for this community: https://gupea-staging.ub.gu.se/handle/2077/17606
Browsing Department of Mathematical Sciences / Institutionen för matematiska vetenskaper by Issue Date
Now showing 1 - 20 of 43
Item: Index theory in geometry and physics (2011-04-06). Goffeng, Magnus.
This thesis contains three papers in the area of index theory and its applications in geometry and mathematical physics. These papers deal with the problems of calculating the charge deficiency on the Landau levels and of finding explicit analytic formulas for mapping degrees of Hölder continuous mappings. The first paper deals with charge deficiencies on the Landau levels for non-interacting particles in R^2 under a constant magnetic field, or equivalently, one particle moving in a constant magnetic field in even-dimensional Euclidean space. The K-homology class that the charge of a Landau level defines is calculated in two steps. The first step is to show that the charge deficiencies are the same on every particular Landau level. The second step is to show that the lowest Landau level, which is equivalent to the Fock space, defines the same class as the K-homology class on the sphere defined by the Toeplitz operators in the Bergman space of the unit ball. The second and third papers use regularization of index formulas in cyclic cohomology to produce analytic formulas for the degree of Hölder continuous mappings. In the second paper, Toeplitz operators and Henkin-Ramírez kernels are used to find analytic formulas for the degree of a function from the boundary of a relatively compact strictly pseudoconvex domain in a Stein manifold to a compact connected oriented manifold. In the third paper, analytic formulas for Hölder continuous mappings between general even-dimensional manifolds are produced using a pseudo-differential operator associated with the signature operator.

Item: Asymptotics and dynamics in first-passage and continuum percolation (2011-09-06). Ahlberg, Daniel.
This thesis combines the study of asymptotic properties of percolation processes with various dynamical concepts. First-passage percolation is a model for the spatial propagation of a fluid on a discrete structure; the Shape Theorem describes its almost sure convergence towards an asymptotic shape when considered on the square (or cubic) lattice. Asking how percolation structures are affected by simple dynamics or small perturbations presents a dynamical aspect. Such questions were previously studied for discrete processes; here, sensitivity to noise is studied in continuum percolation. Paper I studies first-passage percolation on certain 1-dimensional graphs. It is found that, by identifying a suitable renewal sequence, the asymptotic behaviour is much better understood than in the higher-dimensional cases. Several analogues of classical 1-dimensional limit theorems are derived. Paper II is dedicated to the Shape Theorem itself. It is shown that the convergence, apart from holding almost surely and in L^1, also holds completely. In addition, inspired by dynamical percolation and dynamical versions of classical limit theorems, the almost sure convergence is proved to be dynamically stable. Finally, a third generalization of the Shape Theorem shows that the above conclusions also hold for first-passage percolation on certain cone-like subgraphs of the lattice. Paper III proves that percolation crossings in the Poisson Boolean model, also known as the Gilbert disc model, are noise sensitive. The approach taken generalizes a method introduced by Benjamini, Kalai and Schramm. A key ingredient in the argument is an extremal result on arbitrary hypergraphs, which is used to show that almost no information about the critical process is obtained when conditioning on a denser Poisson process.
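The crossing events studied in Paper III can be illustrated with a plain Monte Carlo simulation of the Gilbert disc model: sample a Poisson number of disc centres in a box, join centres whose discs overlap, and check whether the covered region connects the left edge to the right edge. The sketch below is only such an illustration with arbitrary box size, intensity and radius; it is not the noise-sensitivity analysis of the thesis.

```python
import numpy as np
from collections import deque

def gilbert_crossing(width=10.0, height=5.0, intensity=1.0, radius=0.6, rng=None):
    """Left-right crossing of the occupied region in the Poisson Boolean
    (Gilbert disc) model on a width x height box. Toy illustration only;
    all parameter values are arbitrary."""
    rng = np.random.default_rng(rng)
    n = rng.poisson(intensity * width * height)
    pts = rng.uniform([0, 0], [width, height], size=(n, 2))
    # Two discs overlap iff their centres are within 2 * radius.
    diff = pts[:, None, :] - pts[None, :, :]
    adj = (diff ** 2).sum(-1) <= (2 * radius) ** 2
    touches_left = pts[:, 0] <= radius
    touches_right = pts[:, 0] >= width - radius
    # Breadth-first search from all discs touching the left edge.
    seen = touches_left.copy()
    queue = deque(np.flatnonzero(touches_left))
    while queue:
        i = queue.popleft()
        if touches_right[i]:
            return True
        for j in np.flatnonzero(adj[i] & ~seen):
            seen[j] = True
            queue.append(j)
    return False

# Monte Carlo estimate of the crossing probability at this intensity.
hits = sum(gilbert_crossing(rng=k) for k in range(200))
print("estimated crossing probability:", hits / 200)
```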
Item: Extreme Value Analysis of Huge Datasets: Tail Estimation Methods in High-Throughput Screening and Bioinformatics (2011-10-13). Zholud, Dmitrii.
This thesis presents results in Extreme Value Theory with applications to High-Throughput Screening and Bioinformatics. The methods described here are, however, applicable to the statistical analysis of huge datasets in general. The main results are covered in four papers. The first paper develops novel methods to handle false rejections in High-Throughput Screening experiments where testing is done at extreme significance levels, with low degrees of freedom, and when the true null distribution may differ from the theoretical one. We introduce efficient and accurate estimators of the False Discovery Rate and related quantities, provide methods for estimating the true null distribution resulting from data preprocessing, and give techniques to compare it with the theoretical null distribution. Extreme Value Statistics provides a natural analysis tool: a simple polynomial model for the tail of the distribution of p-values. We exhibit the properties of the estimators of the parameters of the model, and point to model checking tools, both for independent and dependent data. The methods are tried out on two large-scale genomic studies and on an fMRI brain scan experiment. The second paper gives a strict mathematical basis for the above methods. We present asymptotic formulas for the distribution tails of probably the most commonly used statistical tests under non-normality, dependence, and non-homogeneity, and derive bounds on the absolute and relative errors of the approximations. In papers three and four we study high-level excursions of the Shepp statistic for the Wiener process and for a Gaussian random walk. The application areas include finance and insurance, as well as sequence alignment scoring and database searches in Bioinformatics.
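A polynomial tail model of the kind mentioned above can be illustrated with a toy calculation: below a small threshold u one assumes P(p <= x) is approximately C x^k, and the exponent k is estimated from the p-values falling below u. The sketch below is a schematic maximum likelihood fit under that assumed model, not the estimators developed in the thesis; the threshold and the simulated data are arbitrary.

```python
import numpy as np

def tail_exponent(pvalues, threshold=1e-3):
    """MLE of k in the tail model P(p <= x | p <= u) = (x / u)^k, x in (0, u].
    Toy illustration; threshold choice is arbitrary."""
    p = np.asarray(pvalues)
    tail = p[p <= threshold]
    if tail.size == 0:
        raise ValueError("no p-values below the threshold")
    # Up to a constant, the log-likelihood is n*log(k) + (k-1)*sum(log(x_i/u));
    # setting its derivative to zero gives the estimator below.
    return tail.size / np.sum(np.log(threshold / tail))

# Toy data: mostly uniform "null" p-values plus a few very small ones.
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(size=100_000),
                        rng.uniform(size=50) ** 4])   # heavier lower tail
print("estimated tail exponent k:", tail_exponent(pvals))
```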
Item: Percolation: Inference and Applications in Hydrology (2011-11-25). Hammar, Oscar.
Percolation theory is a branch of probability theory describing connectedness in a stochastic network. The connectedness of a percolation process is governed by a few, typically one or two, parameters. A central theme in this thesis is to draw inference about the parameters of a percolation process based on information about whether particular points are connected or not. Special attention is paid to issues of consistency as the number of points whose connectedness is revealed tends to infinity. A positive result concerns Bayesian consistency for a bond percolation process on the square lattice $\mathbb{L}^2$ - a process obtained by independently removing each edge of $\mathbb{L}^2$ with probability $1-p$. Another result on Bayesian consistency relates to a continuum percolation model which is obtained by placing discs of fixed radii at each point of a Poisson process in the plane, $\mathbb{R}^2$. Another type of result concerns the computation of quantities relevant to inference for percolation processes. Convergence of MCMC algorithms for the computation of the posterior is proved, both for bond percolation on a subset of $\mathbb{L}^2$ and for continuum percolation on a subset of $\mathbb{R}^2$. The convergence of a stochastic version of the EM algorithm for the computation of the maximum likelihood estimate in a bond percolation problem is also considered. Finally, the theory is applied to hydrology. A model of a heterogeneous fracture amenable to a percolation theory analysis is suggested, and the fracture's ability to transmit water is related to the fracture's median aperture.
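The bond percolation model described above is straightforward to simulate: on a finite n x n piece of the square lattice, each edge is kept independently with probability p, and one records whether two given vertices end up in the same open cluster. Repeating this over many realisations gives a Monte Carlo approximation of the connection probability that likelihood-based or Bayesian inference about p would build on. The sketch below is only such a simulation, with an arbitrary grid size and arbitrary test points, not the inference machinery of the thesis.

```python
import numpy as np

def find(parent, i):
    """Union-find root with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def connected_in_bond_percolation(n, p, a, b, rng):
    """Keep each edge of an n x n grid independently with probability p;
    return True if lattice points a and b lie in the same open cluster.
    Toy illustration; grid size and test points are arbitrary."""
    parent = list(range(n * n))
    def idx(x, y):
        return x * n + y
    def union(i, j):
        ri, rj = find(parent, i), find(parent, j)
        if ri != rj:
            parent[ri] = rj
    for x in range(n):
        for y in range(n):
            if x + 1 < n and rng.random() < p:      # edge to (x+1, y)
                union(idx(x, y), idx(x + 1, y))
            if y + 1 < n and rng.random() < p:      # edge to (x, y+1)
                union(idx(x, y), idx(x, y + 1))
    return find(parent, idx(*a)) == find(parent, idx(*b))

rng = np.random.default_rng(1)
reps = 500
hits = sum(connected_in_bond_percolation(40, 0.55, (5, 20), (35, 20), rng)
           for _ in range(reps))
print("estimated connection probability at p = 0.55:", hits / reps)
```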
Item: Okounkov bodies and geodesic rays in Kähler geometry (2012-05-10). Witt Nyström, David.
This thesis presents three papers dealing with questions in Kähler geometry. In the first paper we construct a transform, called the Chebyshev transform, which maps continuous hermitian metrics on a big line bundle to convex functions on the associated Okounkov body. We show that this generalizes the classical Legendre transform in convex and toric geometry, and also Chebyshev constants in pluripotential theory. Our main result is that the integral of the difference of two transforms over the Okounkov body is equal to the Monge-Ampère energy of the two metrics. The Monge-Ampère energy, sometimes also called the Aubin-Mabuchi energy or the Aubin-Yau functional, is a well-known functional in Kähler geometry; it is the primitive function of the Monge-Ampère operator. As a special case we get that the weighted transfinite diameter is equal to the mean over the unit simplex of the weighted directional Chebyshev constants. As an application we prove the differentiability of the Monge-Ampère energy on the ample cone, extending previous work by Berman-Boucksom. In the second paper we associate to a test configuration for a polarized variety a filtration of the section ring of the line bundle. Using the recent work of Boucksom-Chen we get a concave function on the Okounkov body whose law with respect to Lebesgue measure determines the asymptotic distribution of the weights of the test configuration. We show that this is a generalization of a well-known result in toric geometry. In the third paper, starting with the data of a curve of singularity types, we use the Legendre transform to construct weak geodesic rays in the space of positive singular metrics on an ample line bundle $L$. Using this we associate weak geodesics to suitable filtrations of the algebra of sections of $L$. In particular this works for the natural filtration coming from an algebraic test configuration, and we show how this in the non-trivial case recovers the weak geodesic ray of Phong-Sturm.
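For orientation, the classical Legendre transform that the Chebyshev transform is said to generalize can be recalled in one line; this is standard convex analysis quoted only as background, not a statement taken from the thesis. For a function $f$ on $\mathbb{R}^n$,

$$ f^{*}(y) \;=\; \sup_{x \in \mathbb{R}^{n}} \bigl( \langle x, y \rangle - f(x) \bigr), $$

and for convex, lower semicontinuous $f$ the transform is an involution, $f^{**} = f$. Roughly speaking, in toric geometry this is the correspondence between invariant metrics and convex functions on the moment polytope that the Okounkov-body construction extends.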
Item: The Dirac Equation: Numerical and Asymptotic Analysis (2012-11-28). Almanasreh, Hasan.
The thesis consists of three parts; although each part belongs to a specific subject area in mathematics, all are considered subfields of perturbation theory. The main objective of the presented work is the study of the Dirac operator. The first part concerns the treatment of spurious eigenvalues in the computation of the discrete spectrum. The second part considers G-convergence theory for positive definite parts of a family of Dirac operators and for general positive definite self-adjoint operators. The third part discusses the convergence of wave operators for some families of Dirac operators and for general self-adjoint operators. In the first and main part, a stable numerical scheme, using finite element and Galerkin-based $hp$-cloud methods, is developed to remove the spurious eigenvalues from the computational solution of the Dirac eigenvalue problem. The scheme is based on applying a Petrov-Galerkin formulation to introduce artificial diffusivity that stabilizes the solution. The added diffusion terms are controlled by a stability parameter which is derived for the particular problem. The derivation of the stability parameter is the main part of the scheme; it is obtained for specific basis functions in the finite element method and then generalized to any set of admissible basis functions in the $hp$-cloud method. In the second part, G-convergence theory is applied to positive definite parts of the Dirac operator perturbed by $h$-dependent abstract potentials, where $h$ is a parameter allowed to grow to infinity. After shifting the perturbed Dirac operator so that the point spectrum is positive definite, the spectral measure is used to obtain projected positive definite parts of the operator, in particular the part restricted to the point spectrum. Using the general definition of G-convergence, G-limits, as $h$ approaches infinity, are proved for these projected parts under suitable conditions on the perturbations. Moreover, G-convergence theory is also discussed for some positive definite self-adjoint $h$-dependent operators. The purpose of applying G-convergence is to study the asymptotic behavior of the corresponding eigenvalue problems. In this regard, the eigenvalue problems for the considered operators are shown to converge, as $h$ approaches infinity, to the eigenvalue problems of their associated G-limits. In the third part, scattering theory is studied for the Dirac operator and for general self-adjoint operators with classes of $h$-dependent perturbations. For the Dirac operator with various power-like decaying $h$-dependent potentials, the wave operators exist and are complete. In our study, strong convergence, as $h$ approaches infinity, of these wave operators is proved and their strong limits are characterized for specific potentials. For general self-adjoint operators, the stationary approach of scattering theory is employed to study the existence and convergence of the stationary and time-dependent $h$-dependent wave operators.

Item: Exercising Mathematical Competence: Practising Representation Theory and Representing Mathematical Practice (2013-04-05). Säfström, Anna Ida.
This thesis assembles two papers in mathematics and two papers in mathematics education. In the mathematics part, representation theory is practised. Two Clebsch-Gordan type problems are addressed. The original problem concerns the decomposition of the tensor product of two finite dimensional, irreducible highest weight representations of $GL_{\mathbb{C}}(n)$. This problem is known to be equivalent to the characterisation of the eigenvalues of the sum of two Hermitian matrices. In this thesis, the method of moment maps and coadjoint orbits is used to establish an equivalence between the eigenvalue problem for skew-symmetric matrices and the tensor product decomposition in the case of $SO_{\mathbb{C}}(2k)$. In addition, some irreducible, infinite dimensional, unitary highest weight representations of $\mathfrak{gl}_{\mathbb{C}}(n+1)$ are determined. In the mathematics education part a framework is developed, offering a language and a graphical tool for representing the exercising of competence in mathematical practices. The development sets out from another framework, where competence is defined in terms of mastery. Adjustments are made in order to increase the coherence of the framework, to relate the constructs to contemporary research, and to enable analysis of the exercising of competence. These modifications result in two orthogonal sets of essential aspects of mathematical competence: five competencies and two aspects. The five competencies reflect different constituents of mathematical practice: representations, procedures, connections, reasoning and communication. The two aspects evince two different modes of the competencies: the productive and the analytic. The operationalisation of the framework gives rise to an analysis guide and a competency graph. The framework is applied to two sets of empirical data. In the first study, young children's exercising of competencies in handling whole numbers is analysed. The results show that the analytical tools are able to explain this mathematical practice from several angles: in relation to a specific concept, in a certain activity, and in how different representations may pervade procedures and interaction. The second study describes university students' exercising of competencies in a proving activity. The findings reveal that, while reasoning and the analytic aspect are significant in proving, the other competencies and the productive aspect play important roles as well. Combined, the two studies show that the framework has explanatory power for various mathematical practices. In light of this framework, this thesis exercises both aspects of mathematical competence: the productive aspect in representation theory and the analytic aspect in the development of the framework.

Item: Stochastic Models in Phylogenetic Comparative Methods: Analytical Properties and Parameter Estimation (2013-09-16). Bartoszek, Krzysztof.
Phylogenetic comparative methods are well-established tools for using inter-species variation to analyse phenotypic evolution and adaptation. They are generally hampered, however, by predominantly univariate approaches and by failure to include uncertainty and measurement error in the phylogeny as well as in the measured traits. This thesis addresses all three of these issues. First, by investigating the effects of correlated measurement errors on a phylogenetic regression. Second, by developing a multivariate Ornstein-Uhlenbeck model combined with a maximum-likelihood estimation package in R; this model allows, uniquely, a direct way of testing adaptive coevolution. Third, by accounting for the often substantial phylogenetic uncertainty in comparative studies, which requires an explicit model for the tree. Based on recently developed conditioned branching processes, with Brownian and Ornstein-Uhlenbeck evolution on top, expected species similarities are derived, together with phylogenetic confidence intervals for the optimal trait value. Finally, inspired by these developments, the phylogenetic framework is illustrated by an exploration of questions concerning "time since hybridization", the distribution of which proves to be asymptotically exponential.
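The Ornstein-Uhlenbeck model underlying the multivariate estimation package can be illustrated in its simplest univariate form, $dX_t = -\alpha (X_t - \theta)\,dt + \sigma\,dW_t$: a trait is pulled towards an optimum $\theta$ at rate $\alpha$ while being perturbed by Brownian noise. The sketch below is a plain Euler-Maruyama simulation along a single lineage with arbitrary parameter values, not the phylogenetic machinery of the thesis.

```python
import numpy as np

def simulate_ou(alpha, theta, sigma, x0, t_end, dt=0.01, rng=None):
    """Euler-Maruyama simulation of dX = -alpha*(X - theta)*dt + sigma*dW.
    Toy illustration; parameter values are arbitrary."""
    rng = np.random.default_rng(rng)
    n_steps = int(t_end / dt)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))
        x[i + 1] = x[i] - alpha * (x[i] - theta) * dt + sigma * dw
    return x

path = simulate_ou(alpha=2.0, theta=1.0, sigma=0.5, x0=-1.0, t_end=5.0, rng=0)
# The stationary distribution of this process is N(theta, sigma^2 / (2*alpha)).
print("end value:", path[-1], "stationary sd:", 0.5 / np.sqrt(2 * 2.0))
```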
Item: Tracking mathematical giftedness in an egalitarian context (2013-10-31). Mattsson, Linda.
In three different studies, upper secondary school head teachers' characterization and identification of mathematical giftedness were investigated. A survey study (Paper II) explored the conceptions held by 36 randomly selected upper secondary school head teachers in mathematics. An interview study (Paper III) investigated the conceptions held by three purposively selected head teachers working at the longest running gifted programs in mathematics in Swedish upper secondary schools. A third study (Paper IV) looked for creativity, the characteristic head teachers most frequently associated with giftedness, in the admission tests used at the cutting-edge programs in mathematics in upper secondary school. As compared to theoretical models, results showed that the head teachers collectively expressed nuanced characterizations of mathematical giftedness and the identification thereof. This was especially demonstrated by the head teachers at the gifted mathematics programs. Still, for individual head teachers, there is a need to further their knowledge about the different abilities contributing to manifestations of mathematical giftedness. This would increase the possibility of identifying and developing the mathematical abilities of an even greater number of mathematically promising students. Krutetskii's (1976) structure of mathematical abilities manifested by capable mathematics students was used as a framework for the content analysis in the first two studies, and Lithner's (2008) framework for creative and imitative reasoning was used in the third study. In a fourth study (Paper V) the representation of different student groups at five purposively selected gifted programs in upper secondary school was investigated. Findings from this comparative study of demographical factors – gender, geographical origin, and highest education of parents – were complemented by findings from the interview study, where the cognitive, as well as personal and social, characteristics of students participating at three gifted programs were expressed. Results from the interview study indicated that students participating at the gifted programs showed signs of mathematical giftedness that are not necessarily connected to schoolhouse giftedness. Both mathematically gifted students who were individualists and reluctant to participate in traditional school mathematics, and those who were hardworking and ambitious, were recognized. Participating students had special needs connected to their giftedness, such as approaching mathematical tasks in their own way and learning how to communicate mathematics in written solutions, which call for special education. The demographical study showed that it is mostly males with highly educated parents who have found their way to gifted programs in mathematics. In sum, results indicate that the head teachers at the gifted programs acknowledge that there are gifted mathematics students with special educational needs at their gifted programs. There is also a call for the development of complementary educational activities to reach a greater number and variety of gifted students.

Item: A contribution to the design and analysis of phase III clinical trials (2013-11-08). Lisovskaja, Vera.
Clinical trials are an established methodology for evaluation of the effects of a new medical treatment. These trials are usually divided into several phases, namely phase I through IV. The earlier phases (I and II) are relatively small and have a more exploratory nature. The later phase III is confirmatory and aims to demonstrate the efficacy and safety of the new treatment. This phase is the final one before the treatment is marketed, with phase IV consisting of post-marketing studies. Phase III is initiated only if the conductors of the clinical study judge that the evidence from earlier stages indicates clearly that the new treatment is effective. However, several studies performed in recent years show that this assessment is not always correct. Two papers written on the subject point out average attrition rates of around 45% and 30%. In other words, it is estimated that only around two thirds of the compounds that enter phase III finish it successfully. This thesis examines some of the possible ways of improving efficiency in phase III clinical trials. The thesis consists of four papers on various topics that touch on this subject: adaptive designs (paper I), the number of doses (paper II) and multiplicity correction procedures (papers III and IV). The first paper examines the properties of the so-called dual test, which can be applied in adaptive designs with sample size re-estimation. This test serves as a safeguard against unreasonable conclusions that may otherwise arise if an adaptive design is used. However, this comes at the price of a possible power loss compared to the standard test applied in such situations. The dual test is evaluated by considering several scenarios where its use would be natural. In many cases the power loss is minimal or non-existent. The second paper considers the optimal number and placement of doses used in phase III, with the probability of success of the trial used as the optimality criterion. One common way of designing phase III trials is to divide the patients into two groups, one group receiving the new drug and the other a control. However, as is demonstrated in paper II, this approach will be inferior to a design with two different doses and a control if there is enough uncertainty in the dose-response model prior to the initiation of the trial. The last two papers study the possible gain that results from optimization of the multiplicity correction procedure that is applied if more than one hypothesis is tested in the same trial. Two families of such procedures are considered. The first one, examined in paper III, consists of a combination of a weighted Bonferroni test statistic with the principle of closed testing. The second one, examined in paper IV, is based on combining the same principle with a "pooled" test statistic. Paper III demonstrates that optimizing a multiplicity testing procedure can lead to a significant power increase as compared to simpler, non-optimized, procedures. The optimization is performed with respect to expected utility, an approach that originates from decision theory. Paper IV examines the difference between the Bonferroni-based and the pooled multiplicity corrections, finding the latter to be superior to the former if the test statistics follow a known multivariate Normal distribution.
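The family of procedures studied in paper III, a weighted Bonferroni test combined with the closure principle, can be sketched for a small number of hypotheses: an elementary hypothesis is rejected at level alpha only if every intersection hypothesis containing it is rejected, and each intersection is tested by comparing each of its p-values with its share of the level. The code below is a generic textbook-style implementation with equal weights as the default, not the optimized procedures of the thesis; the example p-values are arbitrary.

```python
from itertools import combinations

def closed_weighted_bonferroni(pvalues, alpha=0.05, weights=None):
    """Closed testing where each intersection hypothesis H_S is rejected if
    p_i <= alpha * w_i / sum_{j in S} w_j for some i in S (weighted Bonferroni).
    Returns the indices of rejected elementary hypotheses. Exponential in the
    number of hypotheses, so intended only for a handful of them."""
    m = len(pvalues)
    w = weights if weights is not None else [1.0] * m
    def reject_intersection(S):
        total = sum(w[j] for j in S)
        return any(pvalues[i] <= alpha * w[i] / total for i in S)
    rejected = set()
    for i in range(m):
        # H_i is rejected iff every intersection hypothesis containing i is rejected.
        if all(reject_intersection(S)
               for k in range(1, m + 1)
               for S in combinations(range(m), k)
               if i in S):
            rejected.add(i)
    return rejected

# With equal weights this reduces to the Holm procedure; here only H_0 is rejected.
print(closed_weighted_bonferroni([0.004, 0.030, 0.047], alpha=0.05))
```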
Item: Tight maps, a classification (2014-09-05). Hamlet, Oskar.
This thesis concerns the classification of tight totally geodesic maps between Hermitian symmetric spaces of noncompact type. In Paper I we classify holomorphic tight maps. We introduce a new criterion for tightness of Hermitian regular subalgebras. Following the classification of holomorphic maps by Ihara and Satake, we go through the lists of (H2)-homomorphisms and Hermitian regular subalgebras and determine which are tight. In Paper II we show that there are no nonholomorphic tight maps into classical codomains (except the known ones from the Poincaré disc). As the proof relies heavily on composition arguments, we investigate in detail when a composition of tight maps is tight. We develop a new criterion for nontightness in terms of how complex representations of Hermitian Lie algebras branch when restricted to certain subalgebras. Using this we prove the result for a few low-rank cases, which then extends to the full result by composition arguments. The branching method in Paper II fails to encompass exceptional codomains. We treat one exceptional case using weighted Dynkin diagrams and the other by showing, in Paper III, that there exists an unexpected decomposition of homomorphisms. Together these three papers yield a full classification of tight maps from irreducible domains.

Item: Probabilistic modeling in sports, finance and weather (2014-10-02). Lennartsson, Jan.
In this thesis, we build mathematical and statistical models for a wide variety of real-world applications. The mathematical models include applications in team sport tactics and optimal portfolio selection, while the statistical modeling concerns weather and specifically precipitation. For the sport application, we define an underlying value function for evaluating team sport situations in a game-theoretic set-up. A consequence of the adopted setting is that the concept of game intelligence is concretized and we are able to give optimal strategies in various decision situations. Finally, we analyze specific examples within ice hockey and team handball and show that these optimal strategies are not always applied in practice, indicating sub-optimal player behaviour even by professionals. Regarding the finance application, we analyze optimal portfolio selection when performance is measured in excess of an externally given benchmark. This approach to measuring performance dominates in the financial industry. We assume that the assets follow the Barndorff-Nielsen and Shephard model, and are able to give the optimal value function explicitly in Feynman-Kac form, as well as the optimal portfolio weights. For the weather application, we analyze the precipitation process over the spatial domain of Sweden. We model the precipitation process with the aim of creating a weather generator: a stochastic generator whose synthesized data are similar to the observed process in a weak sense. In Paper [C], the precipitation process is modeled as a point-wise product of a zero-one Markov process, indicating the occurrence or absence of rainfall, and a transformed Gaussian process, giving the intensities. In Paper [D], the process is modeled as a transformed censored latent Gaussian field. Both models accurately capture significant properties of the modeled quantity. In addition, the second model also possesses the substantial feature of accurately replicating the spatial dependence structure.
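The construction in Paper [C], a zero-one Markov chain for rain occurrence multiplied point-wise by a transformed Gaussian intensity, can be imitated in a few lines for a single site. The transition probabilities, the AR(1) correlation and the exponential transform below are arbitrary choices made only to show the structure, not fitted values from the thesis.

```python
import numpy as np

def toy_weather_generator(n_days, p_wd=0.3, p_ww=0.7, ar=0.8, rng=None):
    """Daily precipitation as (0-1 Markov occurrence) * (transformed Gaussian).
    p_wd: P(wet | dry), p_ww: P(wet | wet); ar: AR(1) coefficient of the latent
    Gaussian; exp(.) maps it to a positive intensity. Toy illustration only."""
    rng = np.random.default_rng(rng)
    wet = np.zeros(n_days, dtype=int)
    z = np.zeros(n_days)
    for t in range(1, n_days):
        p_wet = p_ww if wet[t - 1] else p_wd
        wet[t] = rng.random() < p_wet
        z[t] = ar * z[t - 1] + np.sqrt(1 - ar ** 2) * rng.normal()
    return wet * np.exp(z)          # zero on dry days, positive amount on wet days

rain = toy_weather_generator(365, rng=42)
print("wet days:", int((rain > 0).sum()),
      "mean wet-day amount:", float(rain[rain > 0].mean()))
```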
Item: A two-stage numerical procedure for an inverse scattering problem (Chalmers University of Technology and University of Gothenburg, 2015). Bondestam Malmberg, John; Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg.
In this thesis we study a numerical procedure for the solution of the inverse problem of reconstructing the location, shape and material properties (in particular refractive indices) of scatterers located in a known background medium. The data consist of time-resolved backscattered radar signals from a single source position. This relatively small amount of data and the ill-posed nature of the inversion are the main challenges of the problem. Mathematically, the problem is formulated as a coefficient inverse problem for a system of partial differential equations derived from Maxwell's equations. The numerical procedure is divided into two stages. In the first stage, a good initial approximation for the unknown coefficient is computed by an approximately globally convergent algorithm. This initial approximation is refined in the second stage, where an adaptive finite element method is employed to minimize a Tikhonov functional. An important tool for the second stage is a posteriori error estimates – estimates in terms of known (computed) quantities – for the difference between the computed coefficient and the true minimizing coefficient. This thesis includes four papers. In the first two, the a posteriori error analysis required for the adaptive finite element method in the second stage is extended from the previously existing indirect error estimators to direct ones. The last two papers concern verification of the two-stage numerical procedure on experimental data. We find that the location and material properties of scatterers are obtained already in the first stage, while shapes are significantly improved in the second stage.
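As background, a Tikhonov functional of the kind minimized in the second stage combines a data-misfit term with a regularization term penalizing the distance to the first-stage approximation. Schematically, with $u(c)$ the field simulated for a trial coefficient $c$, $\tilde{u}$ the observed backscattered data on the observation part $\Gamma$ of the boundary over the time interval $(0,T)$, $c_0$ the initial approximation, and $\gamma > 0$ a regularization parameter,

$$ F(c) \;=\; \tfrac{1}{2} \int_{0}^{T}\!\!\int_{\Gamma} \bigl( u(c) - \tilde{u} \bigr)^{2} \, dS \, dt \;+\; \tfrac{\gamma}{2}\, \| c - c_{0} \|^{2}. $$

This is a generic sketch of the structure, not the exact functional of the papers; the adaptive method typically refines the mesh where the a posteriori error indicators for the reconstructed coefficient are largest.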
Item: Mathematical Reasoning - In physics and real-life context (2015-05-05). Johansson, Helena.
This thesis is a compilation of four papers in which mathematical reasoning is examined in various contexts in which mathematics is an integral part. It is known from previous studies that a focus on rote learning and procedural mathematical reasoning hampers students' learning of mathematics. The aims of this thesis are to explore how mathematical reasoning affects upper secondary students' possibilities to master the physics curricula, and how real-life contexts in mathematics affect students' mathematical reasoning. This is done by analysing the mathematical reasoning requirements in Swedish national physics tests, as well as by examining how mathematical reasoning affects students' success on the tests/tasks. Furthermore, the possible effect of the presence of real-life contexts in Swedish national mathematics tasks on students' success is explored, as well as whether the effect differs when account is taken of mathematical reasoning requirements. The framework that is used for categorising mathematical reasoning distinguishes between imitative and creative mathematical reasoning, where the latter, in particular, involves reasoning based on intrinsic properties. Data consisted of ten Swedish national physics tests for upper secondary school, with additional student data for eight of the tests, and six Swedish national mathematics tests for upper secondary school, with additional student data. Both qualitative and quantitative methods were used in the analyses. The qualitative analysis consisted of structured comparisons between representative student solutions and the students' educational history. Furthermore, various descriptive statistics and significance tests were used. The main results are that a majority of the physics tasks require mathematical reasoning, and particularly that creative mathematical reasoning is required to fully master the physics curricula. Moreover, the ability to reason mathematically creatively seems to have a positive effect on students' success on physics tasks. The results additionally indicate that there is an advantage to the presence of a real-life context in mathematics tasks when creative mathematical reasoning is required. This advantage seems to be particularly notable for students with lower grades.
Item: Topics in convex and mixed binary linear optimization (2015-05-08). Gustavsson, Emil.
This thesis concerns theory, algorithms, and applications for two problem classes within the realm of mathematical optimization: convex optimization and mixed binary linear optimization. Five papers containing its main contributions are appended to the thesis. In the first paper a subgradient optimization method is applied to the Lagrangian dual of a general convex and (possibly) nonsmooth optimization problem. The classic dual subgradient method produces primal solutions that are, however, neither optimal nor feasible. Yet, convergence to the set of optimal primal solutions can be obtained by constructing a class of ergodic sequences of the Lagrangian subproblem solutions. We generalize previous convergence results for such ergodic sequences by proposing a new set of rules for choosing the convexity weights defining the sequences. Numerical results indicate that applying the new set of rules yields primal feasible solutions of higher quality than those created by the previously developed rules. The second paper analyzes the properties of a subgradient method when applied to the Lagrangian dual of an infeasible convex program. The primal-dual pair of programs corresponding to an associated homogeneous dual function is shown to be in turn associated with a saddle-point problem, in which the primal part amounts to finding a solution such that the Euclidean norm of the infeasibility in the relaxed constraints is minimized. Convergence results for a conditional dual subgradient optimization method applied to the Lagrangian dual problem are presented. The sequence of ergodic primal iterates is shown to converge to the set of solutions to the primal part of the associated saddle-point problem. The third paper applies a dual subgradient method to a general mixed binary linear program (MBLP). The resulting sequence of primal ergodic iterates is shown to converge to the set of solutions to a convexified version of the original MBLP, and three procedures for utilizing the primal ergodic iterates for constructing feasible solutions to the MBLP are proposed: a Lagrangian heuristic, the construction of a so-called core problem, and a framework for utilizing the ergodic primal iterates within a branch-and-bound algorithm. Numerical results for samples of uncapacitated facility location problems and set covering problems indicate that the proposed procedures are practically useful for solving structured MBLPs. In the fourth paper, the preventive maintenance scheduling problem with interval costs is studied. This problem considers the scheduling of maintenance of the components in a multicomponent system with the objective of minimizing the sum of the set-up and interval costs for the system over a finite time period. The problem is shown to be NP-hard, and an MBLP model is introduced and utilized in three case studies from the railway, aircraft, and wind power industries. In the fifth paper an MBLP model for the optimal scheduling of tamping operations on ballasted rail tracks is introduced. The objective is to minimize the total maintenance costs while maintaining an acceptable condition on the ballasted tracks. The model is thoroughly analyzed and the scheduling problem considered is shown to be NP-hard. A computational study shows that the total cost for maintenance can be reduced by up to 10% as compared with the best policy investigated.
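The core construction of the first and third papers, a dual subgradient method whose Lagrangian subproblem solutions are averaged into an ergodic primal sequence, can be sketched on a toy convex problem: minimize $c^T x$ over a box subject to $Ax \le b$. The step sizes, convexity weights (here plain uniform averaging) and problem data below are arbitrary illustrations, not the rules proposed in the thesis.

```python
import numpy as np

def dual_subgradient_ergodic(c, A, b, lower, upper, steps=2000):
    """Projected subgradient ascent on the Lagrangian dual of
    min c@x s.t. A@x <= b, lower <= x <= upper, with ergodic
    (uniformly averaged) primal iterates. Toy illustration only."""
    lam = np.zeros(len(b))
    x_bar = np.zeros(len(c))
    for k in range(1, steps + 1):
        # Lagrangian subproblem: minimize (c + A^T lam) @ x over the box.
        reduced_cost = c + A.T @ lam
        x = np.where(reduced_cost >= 0, lower, upper)
        # Dual subgradient A@x - b and projected ascent step with 1/k step size.
        lam = np.maximum(0.0, lam + (A @ x - b) / k)
        # Ergodic primal average with equal convexity weights.
        x_bar += (x - x_bar) / k
    return x_bar, lam

c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x_bar, lam = dual_subgradient_ergodic(c, A, b, lower=np.zeros(2), upper=np.ones(2))
# The ergodic iterate approaches the primal optimum (0, 1) of this toy problem.
print("ergodic primal iterate:", x_bar, "dual variable:", lam)
```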
Item: Topics on Harmonic analysis and Multilinear Algebra (2015-09-23). Hormozi, Mahdi.
The present thesis consists of six papers, treating three different research areas: function spaces, singular integrals and multilinear algebra. In paper I, a characterization of continuity of the $p$-$\Lambda$-variation function is given and Helly's selection principle for $\Lambda BV^{(p)}$ functions is established. A characterization of the inclusion of Waterman-Shiba classes into classes of functions with a given integral modulus of continuity is given. A useful estimate on the modulus of variation of functions of class $\Lambda BV^{(p)}$ is found. In paper II, a characterization of the inclusion of Waterman-Shiba classes into $H_{\omega}^{q}$ is given. This corrects and extends an earlier result of a paper from 2005. In paper III, the characterization of the inclusion of Waterman-Shiba spaces $\Lambda BV^{(p)}$ into generalized Wiener classes of functions $BV(q;\,\delta)$ is given. It uses a new and shorter proof and extends an earlier result of U. Goginava. In paper IV, we discuss the existence of an orthogonal basis consisting of decomposable vectors for all symmetry classes of tensors associated with the Semi-dihedral groups $SD_{8n}$. In paper V, we discuss o-bases of symmetry classes of tensors associated with the irreducible Brauer characters of the Dicyclic and Semi-dihedral groups. As in the case of Dihedral groups [46], it is possible that $V_\phi(G)$ has no o-basis when $\phi$ is a linear Brauer character. Let $\vec{P}=(p_1,\dotsc,p_m)$ with $1

Item: Resampling in network modeling of high-dimensional genomic data (University of Gothenburg and Chalmers University of Technology, 2017). Kallus, Jonatan; Department of Mathematical Sciences.
Network modeling is an effective approach for the interpretation of high-dimensional data sets for which a sparse dependence structure can be assumed. Genomic data is a challenging and important example. In genomics, network modeling aids the discovery of biological mechanistic relationships and therapeutic targets. The usefulness of methods for network modeling is improved when they produce networks that are accompanied by a reliability estimate. Furthermore, for methods to produce reliable networks they need to have a low sensitivity to occasional outlier observations. In this thesis, the problem of robust network modeling with error control in terms of the false discovery rate (FDR) of edges is studied. As a background, existing types of genomic data are described and the challenges of high-dimensional statistics and multiple hypothesis testing are explained. Methods for estimation of sparse dependency structures in single samples of genomic data are reviewed. Such methods have a regularization parameter that controls the sparsity of estimates. Methods that are based on a single sample are highly sensitive to outlier observations and to the value of the regularization parameter. We introduce the method ROPE, resampling of penalized estimates, which produces robust network estimates by using many data subsamples and several levels of regularization. ROPE controls edge FDR at a specified level by modeling edge selection counts as coming from an overdispersed beta-binomial mixture distribution. Previously existing resampling-based methods for network modeling are reviewed. ROPE was evaluated on simulated data and gene expression data from cancer patients. The evaluation shows that ROPE outperforms state-of-the-art methods in terms of accuracy of FDR control and robustness. Robust FDR control makes it possible to make a principled decision about how many network links to use in subsequent analysis steps.
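The resampling idea described above can be illustrated with an off-the-shelf penalized estimator: fit a graphical lasso to many row subsamples of the data at a few penalty levels and count how often each edge (off-diagonal nonzero of the estimated precision matrix) is selected. ROPE then models such counts with a beta-binomial mixture to control edge FDR, a step not reproduced here. The sketch uses scikit-learn's GraphicalLasso as a stand-in for the penalized estimator, with arbitrary penalty levels, subsample sizes and toy data.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def edge_selection_counts(X, alphas=(0.2, 0.4), n_subsamples=50,
                          subsample_frac=0.5, rng=None):
    """Count, over row subsamples and penalty levels, how often each edge
    (off-diagonal nonzero of the estimated precision matrix) is selected.
    Illustrative only; all tuning values are arbitrary."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    counts = np.zeros((p, p), dtype=int)
    m = int(subsample_frac * n)
    for _ in range(n_subsamples):
        rows = rng.choice(n, size=m, replace=False)
        for alpha in alphas:
            model = GraphicalLasso(alpha=alpha, max_iter=200).fit(X[rows])
            counts += np.abs(model.precision_) > 1e-8
    np.fill_diagonal(counts, 0)
    return counts

# Toy data: 8 variables with a simple moving-average dependence structure.
rng = np.random.default_rng(3)
z = rng.normal(size=(200, 8))
X = z + 0.8 * np.roll(z, 1, axis=1)
print(edge_selection_counts(X, rng=4))
```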
Item: Matematisk och pedagogisk kunskap – Lärarstudenters uppfattningar av begreppen funktion och variabel [Mathematical and pedagogical knowledge: student teachers' conceptions of the concepts of function and variable] (2017). Borke, Mikael; Mathematical Sciences, Chalmers University of Technology and University of Gothenburg.

Item: Statistical analysis and modelling of gene count data in metagenomics (2017-01-26). Jonsson, Viktor.
Microorganisms form complex communities that play an integral part in all ecosystems on Earth. Metagenomics enables the study of microbial communities through sequencing of random DNA fragments from the collective genome of all present organisms. Metagenomic data is discrete, high-dimensional and contains excessive levels of both biological and technical variability, which makes the statistical analysis challenging. This thesis aims to improve the statistical analysis of metagenomic data in two ways: by characterising the variance structure present in metagenomic data, and by developing and evaluating methods for identification of differentially abundant genes between experimental conditions. In Paper I we evaluate and compare the statistical performance of 14 methods previously used for metagenomic data. In Paper II we implement an overdispersed Poisson model and use it to show that the biological variability varies considerably between genes. The model is used to evaluate a range of assumptions for the variance parameter, and we show that correct modelling of the variance is vital for reducing the number of false positives. In Paper III we extend the model used in Paper II to incorporate zero-inflation. Using the extended model, we show that metagenomic data does indeed contain substantial levels of zero-inflation. We demonstrate that the new model has a high power to detect differentially abundant genes. In Paper IV we suggest improvements to the annotation and quantification of gene content in metagenomic data. Our proposed method, HirBin, uses a data-centric approach to identify effects at a finer resolution, which in turn allows for more accurate biological conclusions. This thesis highlights the importance of statistical modelling and the use of appropriate assumptions in the analysis of metagenomic data. The presented results may also guide researchers in selecting and further refining statistical tools for the reliable analysis of metagenomic data.

Item: Efficient Adaptive Algorithms for an Electromagnetic Coefficient Inverse Problem (2017-06-08). Malmberg, John Bondestam.
This thesis comprises five scientific papers, all of which focus on the inverse problem of reconstructing a dielectric permittivity which may vary in space inside a given domain. The data for the reconstruction consist of time-domain observations of the electric field, resulting from a single incident wave, on a part of the boundary of the domain under consideration. The medium is assumed to be isotropic, non-magnetic, and non-conductive. We model the permittivity as a continuous function, and identify distinct objects by means of iso-surfaces at threshold values of the permittivity. Our reconstruction method is centred around the minimization of a Tikhonov functional, well known from the theory of ill-posed problems, where the minimization is performed in a Lagrangian framework inspired by optimal control theory for partial differential equations. Initial approximations for the regularization and minimization are obtained either by a so-called approximately globally convergent method, or by a (simpler but less rigorous) homogeneous background guess. The functions involved in the minimization are approximated with finite elements, or with a domain decomposition method combining finite elements and finite differences. The computational meshes are refined adaptively with regard to the accuracy of the reconstructed permittivity, by means of an a posteriori error estimate derived in detail in the fourth paper. The method is tested with success on simulated as well as laboratory-measured data.