Doctoral Theses / Doktorsavhandlingar Institutionen för matematiska vetenskaper
Permanent URI for this collection: https://gupea-staging.ub.gu.se/handle/2077/17607
Browsing by Title, showing items 1-20 of 37.
Item: A contribution to the design and analysis of phase III clinical trials (2013-11-08). Lisovskaja, Vera.

Clinical trials are an established methodology for evaluating the effects of a new medical treatment. These trials are usually divided into several phases, namely phase I through IV. The earlier phases (I and II) are relatively small and more exploratory in nature. The later phase III is confirmatory and aims to demonstrate the efficacy and safety of the new treatment. This phase is the final one before the treatment is marketed, with phase IV consisting of post-marketing studies. Phase III is initiated only if those conducting the clinical study judge that the evidence from earlier stages clearly indicates that the new treatment is effective. However, several studies performed in recent years show that this assessment is not always correct. Two papers on the subject report average attrition rates of around 45% and 30%. In other words, it is estimated that only around two thirds of the compounds that enter phase III finish it successfully. This thesis examines some possible ways of improving efficiency in phase III clinical trials. It consists of four papers on topics that touch on this subject: adaptive designs (Paper I), the number of doses (Paper II), and multiplicity correction procedures (Papers III and IV). The first paper examines the properties of the so-called dual test, which can be applied in adaptive designs with sample size re-estimation. This test serves as a safeguard against unreasonable conclusions that may otherwise arise when an adaptive design is used. The price, however, is a possible loss of power compared to the standard test applied in such situations. The dual test is evaluated by considering several scenarios where its use would be natural. In many cases the power loss is minimal or non-existent.
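The idea of the dual test can be illustrated numerically. A minimal sketch, assuming a two-stage design where the adaptive statistic is a Cui-Hung-Wang-type weighted combination with weights fixed at the pre-planned sample sizes, and the conventional statistic weights the stages by the sample sizes actually used; the function names, the one-sided 1.96 critical value, and the specific numbers are illustrative, not taken from the thesis:

```python
import math

def chw_stat(z1, z2, n1, n2_planned):
    """Weighted (Cui-Hung-Wang-type) statistic: stage weights are fixed by
    the pre-planned sample sizes, which preserves the type I error even
    when the second-stage sample size is re-estimated."""
    w1 = math.sqrt(n1 / (n1 + n2_planned))
    w2 = math.sqrt(n2_planned / (n1 + n2_planned))
    return w1 * z1 + w2 * z2

def conventional_stat(z1, z2, n1, n2_actual):
    """Conventional fixed-sample statistic: stages weighted by the sample
    sizes actually used after re-estimation."""
    w1 = math.sqrt(n1 / (n1 + n2_actual))
    w2 = math.sqrt(n2_actual / (n1 + n2_actual))
    return w1 * z1 + w2 * z2

def dual_test(z1, z2, n1, n2_planned, n2_actual, crit=1.96):
    """Dual test: declare efficacy only if BOTH the adaptive (weighted)
    statistic and the conventional statistic exceed the critical value."""
    return (chw_stat(z1, z2, n1, n2_planned) > crit
            and conventional_stat(z1, z2, n1, n2_actual) > crit)
```

For instance, a strong stage-1 signal followed by a null stage 2 with a greatly inflated re-estimated sample size can pass the weighted test while failing the conventional one, so the dual test does not reject; this is the safeguard role described above.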
The second paper considers the optimal number and placement of doses used in phase III, with the probability of success of the trial as the optimality criterion. One common way of designing phase III trials is to divide the patients into two groups, one receiving the new drug and the other a control. However, as Paper II demonstrates, this approach is inferior to a design with two different doses and a control if there is enough uncertainty in the dose-response model prior to the initiation of the trial. The last two papers study the possible gain from optimizing the multiplicity correction procedure that is applied when more than one hypothesis is tested in the same trial. Two families of such procedures are considered. The first, examined in Paper III, combines a weighted Bonferroni test statistic with the principle of closed testing. The second, examined in Paper IV, combines the same principle with a "pooled" test statistic. Paper III demonstrates that optimizing a multiplicity testing procedure can lead to a significant power increase compared to simpler, non-optimized procedures. The optimization is performed with respect to expected utility, an approach that originates from decision theory. Paper IV examines the difference between the Bonferroni-based and the pooled-based multiplicity corrections, finding the latter superior to the former if the test statistics follow a known multivariate Normal distribution.

Item: Asymptotics and dynamics in first-passage and continuum percolation (2011-09-06). Ahlberg, Daniel.

This thesis combines the study of asymptotic properties of percolation processes with various dynamical concepts. First-passage percolation is a model for the spatial propagation of a fluid on a discrete structure; the Shape Theorem describes its almost sure convergence towards an asymptotic shape, when considered on the square (or cubic) lattice.
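First-passage percolation as just described can be simulated directly: attach i.i.d. random passage times to the edges of the lattice and compute fastest paths with Dijkstra's algorithm. A minimal sketch on a finite box of the square lattice with Exp(1) edge weights; the weight distribution, box size, and function name are our illustrative choices:

```python
import heapq
import random

def passage_time(n, size=30, seed=1):
    """First-passage time from the origin to (n, 0) on the Z^2 grid
    restricted to the box [-size, size]^2, with i.i.d. Exp(1) edge
    weights generated lazily; Dijkstra finds the fastest path."""
    rng = random.Random(seed)
    weights = {}
    def w(a, b):
        key = (min(a, b), max(a, b))          # undirected edge key
        if key not in weights:
            weights[key] = rng.expovariate(1.0)
        return weights[key]
    dist = {(0, 0): 0.0}
    pq = [(0.0, (0, 0))]
    target = (n, 0)
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if abs(v[0]) > size or abs(v[1]) > size:
                continue
            nd = d + w(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")
```

Dividing passage_time(n) by n for growing n approximates the time constant in the axis direction; the Shape Theorem is the statement that this convergence holds in all directions simultaneously, almost surely.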
A dynamical aspect enters when one asks how percolation structures are affected by simple dynamics or small perturbations. Such questions were previously studied for discrete processes; here, sensitivity to noise is studied in continuum percolation. Paper I studies first-passage percolation on certain 1-dimensional graphs. It is found that, by identifying a suitable renewal sequence, the asymptotic behaviour is much better understood than in the higher-dimensional case. Several analogues of classical 1-dimensional limit theorems are derived. Paper II is dedicated to the Shape Theorem itself. It is shown that the convergence, apart from holding almost surely and in L^1, also holds completely. In addition, inspired by dynamical percolation and dynamical versions of classical limit theorems, the almost sure convergence is proved to be dynamically stable. Finally, a third generalization of the Shape Theorem shows that the above conclusions also hold for first-passage percolation on certain cone-like subgraphs of the lattice. Paper III proves that percolation crossings in the Poisson Boolean model, also known as the Gilbert disc model, are noise sensitive. The approach taken generalizes a method introduced by Benjamini, Kalai and Schramm. A key ingredient in the argument is an extremal result on arbitrary hypergraphs, which is used to show that almost no information about the critical process is obtained when conditioning on a denser Poisson process.

Item: Combinatorics of solvable lattice models with a reflecting end (2021-04-21). Hietala, Linnea.

In this thesis, we study some exactly solvable, quantum integrable lattice models. Izergin proved a determinant formula for the partition function of the six-vertex (6V) model on an n×n lattice with the domain wall boundary conditions (DWBC) of Korepin. The method has become a useful tool to study the partition functions of similar models.
The determinant formula has also proved useful for seemingly unrelated questions. In particular, by specializing the parameters in Izergin’s determinant formula, Kuperberg was able to give a formula for the number of alternating sign matrices (ASMs). Bazhanov and Mangazeev introduced special polynomials, including pn and qn, that can be used to express certain ground state eigenvector components for the supersymmetric XYZ spin chain of odd length. In Paper I, we find explicit combinatorial expressions for the polynomials qn in terms of the three-color model with DWBC and a (diagonal) reflecting end. The connection emerges by specializing the parameters in the partition function of the eight-vertex solid-on-solid (8VSOS) model with DWBC and a (diagonal) reflecting end in Kuperberg’s way. As a consequence, we find results for the three-color model, including the number of states with a given number of faces of each color. In Paper II, we perform a similar study of the polynomials pn. To get the connection to the 8VSOS model, we specialize all parameters except one in Kuperberg’s way. By using the Izergin–Korepin method in Paper III, we find a determinant formula for the partition function of the trigonometric 6V model with DWBC and a partially (triangular) reflecting end on a 2n × m lattice, m ≤ n. Thereafter we use Kuperberg’s specialization of the parameters to find an explicit expression for the number of states of the model as a determinant of Wilson polynomials. 
We relate this to a type of ASM-like matrices.

Item: Differential forms and currents on non-reduced complex spaces with applications to divergent integrals and the dbar-equation (2020-12-16). Lennartsson, Mattias.

Item: Efficient Adaptive Algorithms for an Electromagnetic Coefficient Inverse Problem (2017-06-08). Malmberg, John Bondestam.

This thesis comprises five scientific papers, all of which focus on the inverse problem of reconstructing a dielectric permittivity that may vary in space inside a given domain. The data for the reconstruction consist of time-domain observations of the electric field, resulting from a single incident wave, on part of the boundary of the domain under consideration. The medium is assumed to be isotropic, non-magnetic, and non-conductive. We model the permittivity as a continuous function and identify distinct objects by means of iso-surfaces at threshold values of the permittivity. Our reconstruction method centres on the minimization of a Tikhonov functional, well known from the theory of ill-posed problems, where the minimization is performed in a Lagrangian framework inspired by optimal control theory for partial differential equations. Initial approximations for the regularization and minimization are obtained either by a so-called approximately globally convergent method, or by a (simpler but less rigorous) homogeneous background guess. The functions involved in the minimization are approximated with finite elements, or with a domain decomposition method combining finite elements and finite differences. The computational meshes are refined adaptively with regard to the accuracy of the reconstructed permittivity, by means of an a posteriori error estimate derived in detail in the fourth paper.
The method is tested with success on simulated as well as laboratory-measured data.

Item: Efficient training of interpretable, non-linear regression models (2023-06-30). Allerbo, Oskar.

Regression, the process of estimating functions from data, comes in many flavors. One of the most commonly used regression models is linear regression, which is computationally efficient and easy to interpret but lacks flexibility. Non-linear regression methods, such as kernel regression and artificial neural networks, tend to be much more flexible, but also harder to interpret and more difficult, and computationally heavier, to train. The five papers of this thesis investigate different techniques for constructing regression models that combine flexibility with interpretability and computational efficiency. In Papers I and II, sparsely regularized neural networks are used to obtain flexible, yet interpretable, models for additive modeling (Paper I) and dimensionality reduction (Paper II). Sparse regression, in the form of the elastic net, is also covered in Paper III, where the focus is on increased computational efficiency by replacing explicit regularization with iterative optimization and early stopping. In Paper IV, inspired by Jacobian regularization, we propose a computationally efficient method for bandwidth selection for kernel regression with the Gaussian kernel. Kernel regression is also the topic of Paper V, where we revisit efficient regularization through early stopping by solving kernel regression iteratively. Using an iterative algorithm for kernel regression also enables changing the kernel during training, which we use to obtain a more flexible method, resembling the behavior of neural networks. In all five papers, the results depend on carefully selecting either the regularization strength or the bandwidth.
Thus, in summary, this work contributes new statistical methods that combine flexibility with interpretability and computational efficiency, based on intelligent hyperparameter selection.

Item: Enabling mechanistic understanding of cellular dynamics through mathematical modelling and development of efficient methods (2024-11-13). Persson, Sebastian.

Cell biology is complex, but unravelling this complexity is important. For example, the recent COVID-19 pandemic highlighted the need to understand how cells function in order to develop efficient vaccines and treatments. However, studying cellular systems is challenging because they are often highly interconnected, dynamic, and contain many redundant components. Mathematical modelling provides a powerful framework to reason about such complexity. In the four papers underlying this thesis, our aim was twofold. The first was to unravel mechanisms that regulate cellular dynamic behaviour in the model organism Saccharomyces cerevisiae. In particular, by developing single-cell dynamic models, we investigated how cells respond to changes in nutrient levels. We identified mechanisms behind the reaction dynamics and uncovered sources of cell-to-cell variability. Additionally, by developing reaction-diffusion models, we studied the size regulation of self-assembled structures and demonstrated how the interplay of feedback mechanisms can regulate structure size. Our second aim was to develop methods and software to facilitate efficient modelling. Modelling often involves fitting models to data to verify specific hypotheses, and it is beneficial if models inconsistent with the data can be discarded rapidly. To this end, we developed software for working with single-cell dynamic models that, in contrast to previous methods, imposes fewer restrictions on how cell-to-cell variability is modelled. Moreover, we developed and evaluated software for fitting population-average dynamic models to data.
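The core step such software repeats many times, comparing a simulated dynamic model against data to estimate parameters, can be sketched with a toy one-parameter model. The exponential-decay model and grid-search estimator below are our illustrative stand-ins, not the methods of the thesis:

```python
import numpy as np

def simulate(k, t, x0=1.0):
    """Closed-form solution of the toy dynamic model dx/dt = -k x, x(0) = x0."""
    return x0 * np.exp(-k * t)

def fit_rate(t, data, k_grid):
    """Least-squares estimate of the rate k over a parameter grid: the
    repeated model-to-data comparison at the heart of parameter estimation."""
    sse = [float(np.sum((simulate(k, t) - data) ** 2)) for k in k_grid]
    return float(k_grid[int(np.argmin(sse))])

# Synthetic data with true rate 0.8 and small measurement noise.
t = np.linspace(0.0, 5.0, 40)
rng = np.random.default_rng(1)
data = simulate(0.8, t) + 0.02 * rng.standard_normal(t.size)
k_hat = fit_rate(t, data, np.linspace(0.1, 2.0, 191))
```

Real cellular models replace the closed-form solution with a numerical ODE or reaction-diffusion solver, which is why efficient implementations matter: each candidate parameter set costs a full simulation.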
This software outperforms the current state of the art, and to make it accessible, we released it as two well-documented open-source packages. Taken together, this thesis sheds light on fundamental regulatory mechanisms and introduces software for efficient modelling.

Item: Exercising Mathematical Competence: Practising Representation Theory and Representing Mathematical Practice (2013-04-05). Säfström, Anna Ida.

This thesis assembles two papers in mathematics and two papers in mathematics education. In the mathematics part, representation theory is practised. Two Clebsch-Gordan type problems are addressed. The original problem concerns the decomposition of the tensor product of two finite-dimensional, irreducible highest weight representations of $GL_{\mathbb{C}}(n)$. This problem is known to be equivalent to the characterisation of the eigenvalues of the sum of two Hermitian matrices. In this thesis, the method of moment maps and coadjoint orbits is used to establish an equivalence between the eigenvalue problem for skew-symmetric matrices and the tensor product decomposition in the case of $SO_{\mathbb{C}}(2k)$. In addition, some irreducible, infinite-dimensional, unitary highest weight representations of $\mathfrak{gl}_{\mathbb{C}}(n+1)$ are determined. In the mathematics education part, a framework is developed, offering a language and a graphical tool for representing the exercising of competence in mathematical practices. The development sets out from another framework, where competence is defined in terms of mastery. Adjustments are made in order to increase the coherence of the framework, to relate the constructs to contemporary research, and to enable analysis of the exercising of competence. These modifications result in two orthogonal sets of essential aspects of mathematical competence: five competencies and two aspects. The five competencies reflect different constituents of mathematical practice: representations, procedures, connections, reasoning and communication.
The two aspects evince two different modes of the competencies: the productive and the analytic. The operationalisation of the framework gives rise to an analysis guide and a competency graph. The framework is applied to two sets of empirical data. In the first study, young children's exercising of competencies in handling whole numbers is analysed. The results show that the analytical tools are able to explain this mathematical practice from several angles: in relation to a specific concept, in a certain activity, and in how different representations may pervade procedures and interaction. The second study describes university students' exercising of competencies in a proving activity. The findings reveal that, while reasoning and the analytic aspect are significant in proving, the other competencies and the productive aspect play important roles as well. Combined, the two studies show that the framework has explanatory power for various mathematical practices. In light of this framework, this thesis exercises both aspects of mathematical competence: the productive aspect in representation theory and the analytic aspect in the development of the framework.

Item: Extreme rainfall modelling under climate change and proper scoring rules for extremes and inference (2024-09-06). Ólafsdóttir, Helga Kristín.

Model development, model inference and model evaluation are three important cornerstones of statistical analysis. This thesis touches on all three: modelling extremes under climate change, evaluating extreme-value models using scoring rules, and using scoring rules for statistical inference on spatial models. The findings are presented in three papers. In Paper I, a new statistical model is developed that uses the connections between the generalised extreme value distribution and the generalised Pareto distribution to capture frequency changes in annual maxima.
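The practical quantity a generalised extreme value (GEV) model fitted to annual maxima delivers is the return level. A minimal sketch of the standard GEV quantile computation; the function name is ours, and the thesis's frequency-trend model is more elaborate than this stationary formula:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level: the (1 - 1/T)-quantile of the generalised
    extreme value distribution with location mu, scale sigma, shape xi."""
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:                      # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)
```

For example, with mu = 0, sigma = 1 and xi = 0 the 100-year level is -log(-log(0.99)), about 4.60; a positive shape xi makes the upper tail heavier and the return level grow faster in T.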
This allows using high-quality annual maxima data, instead of less thoroughly checked daily data, to separately estimate trends in frequency and intensity. The model was applied to annual maximum data from Volume 10 of NOAA Atlas 14, showing that in the Northeastern US there is evidence that extreme rainfall events are occurring more often with rising temperature, but little evidence of trends in the distribution of the sizes of individual extreme rainfall events. Paper II introduces the concept of local weight-scale invariance, a relaxation of local scale invariance for proper scoring rules. This relaxation is suitable for weighted scores, which are useful, for example, when comparing models for extremes. A weight-scale invariant version of the tail-weighted continuous ranked probability score is introduced, and the properties of the different weighted scores are investigated. Finally, Paper III continues on the path of scoring rules, but instead uses them for statistical inference of spatial models. The proposed approach estimates parameters of spatial models by maximising the average leave-one-out cross-validation score (LOOS). The method results in fast computations for Gaussian models with sparse precision matrices, and allows tailoring the estimator's robustness to outliers, and its sensitivity to spatial variations of uncertainty, through the choice of the scoring rule used in the maximisation.

Item: Extreme Value Analysis of Huge Datasets: Tail Estimation Methods in High-Throughput Screening and Bioinformatics (2011-10-13). Zholud, Dmitrii.

This thesis presents results in Extreme Value Theory with applications to High-Throughput Screening and Bioinformatics. The methods described here are, however, applicable to statistical analysis of huge datasets in general. The main results are covered in four papers.
The first paper develops novel methods to handle false rejections in High-Throughput Screening experiments, where testing is done at extreme significance levels, with low degrees of freedom, and where the true null distribution may differ from the theoretical one. We introduce efficient and accurate estimators of the False Discovery Rate and related quantities, provide methods for estimating the true null distribution resulting from data preprocessing, and give techniques to compare it with the theoretical null distribution. Extreme Value Statistics provides a natural analysis tool: a simple polynomial model for the tail of the distribution of p-values. We exhibit the properties of the estimators of the parameters of the model, and point to model-checking tools, both for independent and dependent data. The methods are tried out on two large-scale genomic studies and on an fMRI brain scan experiment. The second paper gives a strict mathematical basis for the above methods. We present asymptotic formulas for the distribution tails of probably the most commonly used statistical tests under non-normality, dependence, and non-homogeneity, and derive bounds on the absolute and relative errors of the approximations. In papers three and four we study high-level excursions of the Shepp statistic for the Wiener process and for a Gaussian random walk. The application areas include finance and insurance, as well as sequence alignment scoring and database searches in Bioinformatics.

Item: Geometrical and percolative properties of spatially correlated models (2020-03-10). Hallqvist Elias, Karl Olof.

This thesis consists of four papers dealing with phase transitions in various models of continuum percolation. These models exhibit complicated dependencies and are generated by different Poisson processes. For each such process there is a parameter, known as the intensity, governing its behavior.
By varying the value of this parameter, the geometrical and topological properties of these models may undergo dramatic and rapid changes. This phenomenon is called a phase transition, and the value at which the change occurs is called a critical value. In Paper I, we study visibility in the vacant set of the Brownian interlacements in Euclidean space and of the Brownian excursions process in the unit disc. For the vacant set of the Brownian interlacements, we obtain upper and lower bounds on the probability of having visibility in some direction to a distance r, in terms of the probability of having visibility to distance r in a fixed direction. For the vacant set of the Brownian excursions, we prove a phase transition in terms of visibility to infinity (with respect to the hyperbolic metric). We also determine the critical value and show that at the critical value there is no visibility to infinity. In Paper II we compute the critical value for percolation in the vacant set of the Brownian excursions process. We also show that the Brownian excursions process is a hyperbolic analogue of the Brownian interlacements. In Paper III, we study the vacant set of a semi-scale-invariant version of the Poisson cylinder model. In this model, the vacant set turns out to be a fractal. We determine the critical value for the so-called existence phase transition, and what happens at the critical value. We also compute the Hausdorff dimension of the fractal whenever it exists. Furthermore, we prove that the fractal exhibits a nontrivial connectivity phase transition in dimensions four and greater, and that it is totally disconnected in dimension two. In the three-dimensional case we prove a partial result: the fractal restricted to a plane is totally disconnected with probability one.
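The role of the intensity parameter in a continuum percolation phase transition can be seen in the simplest such model, the Poisson Boolean (Gilbert disc) model. A minimal left-right crossing test in the unit square, with a union-find structure over overlapping discs; the box size, radius, and implementation details are our illustrative choices:

```python
import numpy as np

def gilbert_crossing(intensity, radius, box=1.0, seed=0):
    """Sample a Poisson number of disc centres uniformly in [0, box]^2 and
    test whether overlapping discs of the given radius connect the left
    wall to the right wall (a crossing of the Gilbert disc model)."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(intensity * box * box)
    pts = rng.uniform(0.0, box, size=(n, 2))
    # Union-find over discs plus two virtual nodes: left wall n, right wall n+1.
    parent = list(range(n + 2))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]     # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    for i in range(n):
        if pts[i, 0] <= radius:
            union(i, n)                       # disc touches the left wall
        if pts[i, 0] >= box - radius:
            union(i, n + 1)                   # disc touches the right wall
        for j in range(i + 1, n):
            if np.sum((pts[i] - pts[j]) ** 2) <= (2.0 * radius) ** 2:
                union(i, j)                   # discs overlap
    return find(n) == find(n + 1)
```

At low intensity the discs form small isolated clusters and crossings are absent; past the critical intensity a giant connected component appears and crossings become overwhelmingly likely.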
In Paper IV we study a continuum percolation process, the random ellipsoid model, generated by taking the intersection of a Poisson cylinder model in d dimensions with a subspace of dimension k. For k between 2 and d-2, we show that there is a non-trivial phase transition concerning the expected number of ellipsoids in the cluster of the origin. When k = d-1, this critical value is zero. We compare these results with results for the classical Poisson Boolean model.

Item: Global residue currents and the Ext functors (2022-09-09). Johansson, Jimmy.

This thesis concerns developments in multivariable residue theory. In particular, we consider global constructions of residue currents related to work by Andersson and Wulcan. In the first paper, we consider global residue currents defined on projective space, and we show that these currents provide a tool for studying polynomial interpolation. Polynomial interpolation is related to local cohomology, and by a result known as local duality, there is a close connection with certain Ext groups. The second paper is devoted to further study of connections between residue currents and the Ext functors. The main result is the construction of a global residue current on a complex manifold, from which we derive an explicit formula for an isomorphism between two different representations of the global Ext groups on complex manifolds.

Item: Hodge Theory in Combinatorics and Mirror Symmetry (2024-10-22). Pochekai, Mykola.

Hodge theory, in its broadest sense, encompasses the study of the decomposition of cohomology groups of complex manifolds, as well as related fields such as periods, motives, and algebraic cycles. In this thesis, ideas from Hodge theory are incorporated into two seemingly unrelated projects, namely mathematical mirror symmetry and combinatorics. Papers I-II explore an instance of genus one mirror symmetry for the complete intersection of two cubics in five-dimensional projective space.
The mirror family for this complete intersection is constructed, and it is demonstrated that the BCOV invariant of the mirror family is related to the genus one Gromov-Witten invariants of the complete intersection of two cubics. This proves new cases of genus one mirror symmetry. Paper III defines Hodge-theoretic structures on triangulations of a special type. It is shown that if a polytope admits a regular, unimodular triangulation with a particular additional property, then its $\delta$-vector from Ehrhart theory is unimodal.

Item: Hyperuniformity and Hyperfluctuations for Random Measures on Euclidean and Non-Euclidean spaces (2025-05-05). Byléhn, Mattias.

In this thesis we study fluctuations of generic random point configurations in Euclidean and symmetric curved geometries. Mathematically, such configurations are interpreted as isometrically invariant point processes, and fluctuations are recorded by the variance of the number of points in a centered ball, the number variance. Hyperuniformity and hyperfluctuation of such configurations in the sense of Stillinger-Torquato are characterized in terms of large-scale asymptotics of the number variance relative to that of an ideal gas, and equivalently by small-scale asymptotics of the Bartlett spectral measure in the diffraction picture. Three papers are appended to the thesis. In Paper I we provide lower asymptotic bounds for number variances of isometrically invariant random measures in Euclidean and hyperbolic spaces, generalizing a result by Beck. In particular, we find that geometric hyperuniformity fails for every isometrically invariant random measure on hyperbolic space. In contrast, we define a notion of spectral hyperuniformity which is satisfied by certain invariant random lattice configurations. In Paper II we establish similar lower asymptotic bounds for number variances of automorphism-invariant point processes in regular trees.
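The number variance that records these fluctuations can be estimated by direct Monte Carlo. A one-dimensional illustrative sketch comparing a uniformly scattered ("ideal gas") configuration, whose variance grows like the window size, with a jittered lattice, whose bounded variance is the hyperuniform signature; all parameter values are illustrative:

```python
import numpy as np

def number_variance(points, R, box, n_windows=2000, seed=0):
    """Monte Carlo estimate of Var(number of points in a randomly placed
    window of length R), with periodic boundary on [0, box)."""
    rng = np.random.default_rng(seed)
    pts = np.sort(points)
    counts = []
    for a in rng.uniform(0.0, box, n_windows):
        b = (a + R) % box
        if a < b:
            c = np.searchsorted(pts, b) - np.searchsorted(pts, a)
        else:  # window wraps around the right edge
            c = len(pts) - np.searchsorted(pts, a) + np.searchsorted(pts, b)
        counts.append(int(c))
    return float(np.var(counts))

rng = np.random.default_rng(1)
N, box, R = 4000, 4000.0, 50.0
gas_pts = rng.uniform(0.0, box, N)                                 # Poisson-like ideal gas
lattice_pts = np.arange(N) * (box / N) + rng.uniform(0.0, 1.0, N)  # jittered unit lattice
v_gas = number_variance(gas_pts, R, box)
v_hyper = number_variance(lattice_pts, R, box)
```

For the gas the variance is of order R (about 50 here), while for the jittered lattice only the two window boundaries contribute, so the variance stays bounded no matter how large R becomes.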
The main result is that these lower bounds are not uniform for the invariant random lattice configurations defined by the fundamental groups of complete regular graphs and the Petersen graph. We also provide a criterion for when these lower bounds are uniform, in terms of certain rational peaks appearing in the diffraction picture. In Paper III we prove the existence and uniqueness of Bartlett spectral measures for invariant random measures on a large class of non-compact commutative spaces, which includes those in Papers I and II. For higher-rank symmetric spaces governed by simple Lie groups, we prove that there is a power strictly less than 2 of the volume of balls that asymptotically bounds the number variance of any invariant random measure from above. Moreover, we derive Bartlett spectral measures for invariant determinantal point processes on commutative spaces, and define a notion of heat kernel hyperuniformity on Euclidean and hyperbolic spaces that is equivalent to spectral hyperuniformity.

Item: Index theory in geometry and physics (2011-04-06). Goffeng, Magnus.

This thesis contains three papers in the area of index theory and its applications in geometry and mathematical physics. The papers deal with the problems of calculating the charge deficiency on the Landau levels, and of finding explicit analytic formulas for mapping degrees of Hölder continuous mappings. The first paper deals with charge deficiencies on the Landau levels for non-interacting particles in R^2 under a constant magnetic field, or equivalently, one particle moving in a constant magnetic field in even-dimensional Euclidean space. The K-homology class that the charge of a Landau level defines is calculated in two steps. The first step is to show that the charge deficiencies are the same on every particular Landau level.
The second step is to show that the lowest Landau level, which is equivalent to the Fock space, defines the same class as the K-homology class on the sphere defined by the Toeplitz operators on the Bergman space of the unit ball. The second and third papers use regularization of index formulas in cyclic cohomology to produce analytic formulas for the degree of Hölder continuous mappings. In the second paper, Toeplitz operators and Henkin-Ramirez kernels are used to find analytic formulas for the degree of a function from the boundary of a relatively compact, strictly pseudo-convex domain in a Stein manifold to a compact, connected, oriented manifold. In the third paper, analytic formulas for Hölder continuous mappings between general even-dimensional manifolds are produced using a pseudo-differential operator associated with the signature operator.

Item: Learning to solve problems that you have not learned to solve: Strategies in mathematical problem solving (2019-08-16). Fülöp, Éva.

This thesis aims to contribute to a deeper understanding of the relationship between problem-solving strategies and success in mathematical problem solving. In its introductory part, it traces and describes the term strategy in mathematics and discusses its relationship to the concepts of method and algorithm. Through these concepts, we identify three decision-making levels in the problem-solving process. The first two parts of this thesis are two different studies analysing how students' problem-solving ability is affected by the learning of problem-solving strategies in mathematics. We investigated the effects of a variation theory-based instructional design for teaching problem-solving strategies within a regular classroom. This was done by analysing a pre- and a post-test to compare the development of an experimental group's and a control group's knowledge of mathematics in general and problem-solving ability in particular.
The analysis of the test results shows that these designed activities improve students’ problem-solving ability without compromising their progress in mathematics in general. The third study in this thesis aims to give a better understanding of the role and use of strategies in mathematical problem-solving processes. By analysing 79 upper secondary school students’ written solutions, we were able to identify decisions made at all three levels and how knowledge at these levels affected students’ problem-solving success. The results show that students who could view the problem as a whole while keeping the sub-problems in mind simultaneously had the best chances of succeeding. In summary, we have shown in the appended papers that the teaching of problem-solving strategies can be integrated into mathematics teaching practice to improve students’ mathematical problem-solving abilities.

Item Limit Theorems for Lattices and L-functions Holm, Kristian

This PhD thesis investigates distributional questions related to three types of objects: unimodular lattices, symplectic lattices, and Hecke L-functions of imaginary quadratic number fields of class number 1. In Paper I, we follow Södergren and examine the asymptotic joint distribution of a collection of random variables arising as geometric attributes of the N = N(n) shortest non-zero lattice vectors (up to sign) in a random unimodular lattice in n-dimensional Euclidean space, as the dimension n tends to infinity: normalizations of the lengths of these vectors, and normalizations of the angles between them. We prove that under suitable conditions on N, this collection of random variables is asymptotically distributed like the first N arrival times of a Poisson process of intensity 1/2, together with a collection of positive standard Gaußians. This generalizes previous work of Södergren.
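As a purely illustrative aside (not part of the thesis, and with a hypothetical function name), the limiting object in Paper I, the first N arrival times of a Poisson process of intensity 1/2, can be simulated by accumulating independent exponential waiting times:

```python
import random

def poisson_arrival_times(n, intensity=0.5, rng=None):
    """First n arrival times of a homogeneous Poisson process.

    The gaps between consecutive arrivals are independent
    Exponential(intensity) variables, so the k-th arrival time is
    the sum of the first k gaps and has mean k / intensity.
    """
    rng = rng or random.Random(0)
    times, t = [], 0.0
    for _ in range(n):
        t += rng.expovariate(intensity)
        times.append(t)
    return times

# With intensity 1/2 the k-th arrival averages 2k.
print(poisson_arrival_times(5))
```

In the theorem, this process describes the limiting joint distribution of the normalized lengths of the shortest lattice vectors as the dimension grows.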
In Paper II, we use methods developed by Björklund and Gorodnik to study the error term in a classical lattice point counting asymptotic due to Schmidt, in the context of symplectic lattices and a concrete increasing family of sets in 2n-dimensional Euclidean space. In particular, we show that this error term satisfies a central limit theorem as the volumes of the sets tend to infinity. Moreover, we obtain new L^p bounds on a height function on the space of symplectic lattices originally introduced by Schmidt. In Paper III, we follow Waxman and study a family of L-functions associated to angular Hecke characters on imaginary quadratic number fields of class number 1. We obtain asymptotic expressions for the 1-level density of the low-lying zeros in the family, both unconditionally and conditionally (under the assumption of the Grand Riemann Hypothesis and the Ratios Conjecture). Our results verify the Katz–Sarnak Density Conjecture in a special case for our family of L-functions.

Item Mathematical Modelling of Cellular Ageing: a Multi-Scale Perspective (2022-04-19) Schnitzer, Barbara

In a growing and increasingly older population, we are progressively challenged by the impact of ageing on individuals and society. The UN declared the years 2021-2030 the Decade of Healthy Ageing, highlighting the efforts to minimise the burden of ageing and age-related diseases. A crucial step towards this goal is to elucidate the basic underlying mechanisms on a molecular and cellular level. While much is known about individual hallmarks of cellular ageing, their interactive and multi-scale nature hinders progress in gaining deeper insights into the emergent effects on an organism. In the five papers underlying this thesis, we aimed to study protein damage accumulation over successive cell divisions (replicative ageing) as one emergent factor defining ageing.
We combined experimental data from the unicellular model organism yeast Saccharomyces cerevisiae with mathematical modelling, which offers systematic and formal ways of analysing the complexity that arises from the interplay between processes on different time and length scales. In that way, we showed how interconnections in the cellular signalling network are essential to ensure a robust adaptation to stress on a short time scale, which is crucial for preventing and handling protein damage. By linking different models for cellular signalling, metabolism and protein damage accumulation, we provided one of the most comprehensive mathematical models of replicative ageing to date. The model allowed us to map metabolic changes during ageing to a dynamic trade-off between protein availability and energy demand, and to investigate global metabolic strategies underlying cellular ageing. Going beyond single-cell models, we examined the synergy between processes that create, retain and repair protein damage, balancing the health of individual cells against the viability of the cell population. Taken together, by constructing, validating and using mathematical models, we unified different scales of protein damage accumulation and explored its causes and consequences. Thus, this thesis contributes to a more comprehensive understanding of cellular ageing, taking a step further towards healthy ageing.

Item Mathematical Reasoning - In physics and real-life context (2015-05-05) Johansson, Helena

This thesis is a compilation of four papers in which mathematical reasoning is examined in various contexts of which mathematics is an integral part. It is known from previous studies that a focus on rote learning and procedural mathematical reasoning hampers students’ learning of mathematics.
The aims of this thesis are to explore how mathematical reasoning affects upper secondary students’ possibilities to master the physics curricula, and how real-life contexts in mathematics affect students’ mathematical reasoning. This is done by analysing the mathematical reasoning requirements in Swedish national physics tests, as well as by examining how mathematical reasoning affects students’ success on the tests and tasks. Furthermore, the possible effect of the presence of real-life contexts in Swedish national mathematics tasks on students’ success is explored, as well as whether the effect differs when account is taken of mathematical reasoning requirements. The framework used for categorising mathematical reasoning distinguishes between imitative and creative mathematical reasoning, where the latter, in particular, involves reasoning based on intrinsic properties. Data consisted of ten Swedish national physics tests for upper secondary school, with additional student data for eight of the tests, and six Swedish national mathematics tests for upper secondary school, with additional student data. Both qualitative and quantitative methods were used in the analyses. The qualitative analysis consisted of structured comparisons between representative student solutions and the students’ educational history. Furthermore, various descriptive statistics and significance tests were used. The main results are that a majority of the physics tasks require mathematical reasoning, and in particular that creative mathematical reasoning is required to fully master the physics curricula. Moreover, the ability to reason mathematically creatively seems to have a positive effect on students’ success on physics tasks. The results additionally indicate that there is an advantage to the presence of real-life contexts in mathematics tasks when creative mathematical reasoning is required.
This advantage seems to be particularly notable for students with lower grades.

Item Network modeling and integrative analysis of high-dimensional genomic data (2020-05-07) Kallus, Jonatan

Genomic data describe biological systems on the molecular level and are, due to the immense diversity of life, high-dimensional. Network modeling and integrative analysis are powerful methods for interpreting genomic data. However, network modeling is limited by the requirement to select model complexity and by a bias towards biologically unrealistic network structures. Furthermore, there is a need for integrative analysis of data sets describing a wider range of biological aspects, studies and groups of subjects. This thesis aims to address these challenges by using resampling to control the false discovery rate (FDR) of edges, by combining resampling-based network modeling with a biologically realistic assumption on the structure, and by increasing the richness of data sets that can be accommodated in integrative analysis, while facilitating the interpretation of results. In paper I, a statistical model for the number of times each edge is included in network estimates across resamples is proposed, to allow for estimation of how the FDR is affected by sparsity. Accuracy is improved compared to state-of-the-art methods, and in a network estimated from cancer data all hub genes have documented cancer-related functions. In paper II, a new method for integrative analysis is proposed. The method, based on matrix factorization, introduces a versatile objective function that allows for the study of more complex data sets and easier interpretation of results. The power of the method as an explorative tool is demonstrated on a set of genomic data. In paper III, network estimation across resamples is combined with repeated community detection to compensate for the structural bias inherent in common network estimation methods.
For estimation of the regulatory network in human cancer, this compensation leads to an increased overlap with a database of gene interactions. Software implementations of the presented methods have been published. The contributed methods further the understanding that can be gained from high-dimensional genomic data, and may thus help to devise new treatments and diagnostics for cancer and other diseases.
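As an illustrative aside (not the thesis’s actual implementation), the resampling idea behind paper I can be sketched as follows: re-estimate a network on many bootstrap resamples and record the fraction of resamples in which each edge is selected, since rarely selected edges are more likely to be false discoveries. All function names below, and the correlation-threshold estimator standing in for the graphical-model estimators used in the thesis, are hypothetical:

```python
import random

def edge_selection_frequencies(samples, estimate_edges, n_resamples=100, rng=None):
    """Fraction of bootstrap resamples in which each edge is selected."""
    rng = rng or random.Random(1)
    counts = {}
    n = len(samples)
    for _ in range(n_resamples):
        boot = [samples[rng.randrange(n)] for _ in range(n)]
        for edge in estimate_edges(boot):
            counts[edge] = counts.get(edge, 0) + 1
    return {e: c / n_resamples for e, c in counts.items()}

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx > 0 and vy > 0 else 0.0

def threshold_estimator(samples, cutoff=0.5):
    """Toy network estimator: connect variables i < j whose absolute
    sample correlation exceeds the cutoff."""
    p = len(samples[0])
    cols = [[row[j] for row in samples] for j in range(p)]
    return {(i, j) for i in range(p) for j in range(i + 1, p)
            if abs(corr(cols[i], cols[j])) > cutoff}

# Toy data: variables 0 and 1 are strongly dependent, variable 2 is noise.
gen = random.Random(0)
data = []
for _ in range(200):
    x = gen.gauss(0, 1)
    data.append((x, x + 0.3 * gen.gauss(0, 1), gen.gauss(0, 1)))

freqs = edge_selection_frequencies(data, threshold_estimator)
# The (0, 1) edge should be selected in essentially every resample,
# while edges involving variable 2 should be rare.
print(freqs)
```

A selection-frequency cutoff on such a table is what connects the resampling scheme to FDR control: modeling the distribution of these counts lets one estimate how many of the retained edges are likely false.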