Department of Mathematical Sciences / Institutionen för matematiska vetenskaper
Permanent URI for this community: https://gupea-staging.ub.gu.se/handle/2077/17606
Browsing Department of Mathematical Sciences / Institutionen för matematiska vetenskaper by Title
Now showing 1 - 20 of 43

Item A contribution to the design and analysis of phase III clinical trials (2013-11-08) Lisovskaja, Vera
Clinical trials are an established methodology for evaluation of the effects of a new medical treatment. These trials are usually divided into several phases, namely phase I through IV. The earlier phases (I and II) are relatively small and have a more exploratory nature. The later phase III is confirmatory and aims to demonstrate the efficacy and safety of the new treatment. This phase is the final one before the treatment is marketed, with phase IV consisting of post-marketing studies. Phase III is initiated only if those conducting the clinical study judge that the evidence from earlier stages clearly indicates that the new treatment is effective. However, several studies performed in recent years show that this assessment is not always correct. Two papers written on the subject point out average attrition rates of around 45% and 30%. In other words, it is estimated that only around two thirds of the compounds that enter phase III finish it successfully. This thesis examines some of the possible ways of improving efficiency in phase III clinical trials. The thesis consists of four papers on various topics that touch on this subject: adaptive designs (paper I), the number of doses (paper II) and multiplicity correction procedures (papers III and IV). The first paper examines the properties of the so-called dual test, which can be applied in adaptive designs with sample size re-estimation. This test serves as a safeguard against unreasonable conclusions that may otherwise arise if an adaptive design is used. However, the price is a possible loss of power compared to the standard test that is applied in such situations. The dual test is evaluated by considering several scenarios where its use would be natural. In many cases the power loss is minimal or non-existent. The second paper considers the optimal number and placement of doses used in phase III, with the probability of success of the trial used as the optimality criterion. One common way of designing phase III trials is to divide the patients into two groups, one group receiving the new drug and another a control. However, as is demonstrated in paper II, this approach will be inferior to a design with two different doses and a control if there is enough uncertainty in the dose-response model prior to the initiation of the trial. The last two papers study the possible gain that results from optimizing the multiplicity correction procedure that is applied when more than one hypothesis is tested in the same trial. Two families of such procedures are considered. The first one, examined in paper III, consists of a combination of a weighted Bonferroni test statistic with the principle of closed testing. The second one, examined in paper IV, is based on combining the same principle with a "pooled" test statistic. Paper III demonstrates that optimizing a multiplicity testing procedure can lead to a significant power increase compared to simpler, non-optimized procedures. The optimization is performed with respect to expected utility, an approach that originates from decision theory. Paper IV examines the difference between the Bonferroni-based and the pooled-based multiplicity corrections, finding the latter to be superior to the former if the test statistics follow a known multivariate Normal distribution.
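
For orientation (a standard textbook formulation, not taken from the thesis itself): a weighted Bonferroni test of an intersection hypothesis $H_I = \bigcap_{i \in I} H_i$ assigns weights $w_i(I) \ge 0$ with $\sum_{i \in I} w_i(I) \le 1$ and rejects $H_I$ if
\[
p_i \le w_i(I)\,\alpha \quad \text{for some } i \in I.
\]
Under the closed testing principle, an individual hypothesis $H_j$ is rejected at familywise level $\alpha$ only if every intersection $H_I$ with $j \in I$ is rejected by its local test; optimizing such a procedure amounts to choosing the weights $w_i(I)$.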

Item A two-stage numerical procedure for an inverse scattering problem (Chalmers University of Technology and University of Gothenburg, 2015) Bondestam Malmberg, John; Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg
In this thesis we study a numerical procedure for the solution of the inverse problem of reconstructing the location, shape and material properties (in particular refractive indices) of scatterers located in a known background medium. The data consist of time-resolved backscattered radar signals from a single source position. This relatively small amount of data and the ill-posed nature of the inversion are the main challenges of the problem. Mathematically, the problem is formulated as a coefficient inverse problem for a system of partial differential equations derived from Maxwell's equations. The numerical procedure is divided into two stages. In the first stage, a good initial approximation for the unknown coefficient is computed by an approximately globally convergent algorithm. This initial approximation is refined in the second stage, where an adaptive finite element method is employed to minimize a Tikhonov functional. An important tool for the second stage is a posteriori error estimates – estimates in terms of known (computed) quantities – for the difference between the computed coefficient and the true minimizing coefficient. This thesis includes four papers. In the first two, the a posteriori error analysis required for the adaptive finite element method in the second stage is extended from the previously existing indirect error estimators to direct ones. The last two papers concern verification of the two-stage numerical procedure on experimental data. We find that the location and material properties of scatterers are obtained already in the first stage, while shapes are significantly improved in the second stage.

Item Asymptotics and dynamics in first-passage and continuum percolation (2011-09-06) Ahlberg, Daniel
This thesis combines the study of asymptotic properties of percolation processes with various dynamical concepts. First-passage percolation is a model for the spatial propagation of a fluid on a discrete structure; the Shape Theorem describes its almost sure convergence towards an asymptotic shape, when considered on the square (or cubic) lattice. Asking how percolation structures are affected by simple dynamics or small perturbations presents a dynamical aspect. Such questions were previously studied for discrete processes; here, sensitivity to noise is studied in continuum percolation. Paper I studies first-passage percolation on certain 1-dimensional graphs. It is found that, by identifying a suitable renewal sequence, the asymptotic behaviour is much better understood than in higher-dimensional cases. Several analogues of classical 1-dimensional limit theorems are derived. Paper II is dedicated to the Shape Theorem itself. It is shown that the convergence, apart from holding almost surely and in L^1, also holds completely. In addition, inspired by dynamical percolation and dynamical versions of classical limit theorems, the almost sure convergence is proved to be dynamically stable. Finally, a third generalization of the Shape Theorem shows that the above conclusions also hold for first-passage percolation on certain cone-like subgraphs of the lattice. Paper III proves that percolation crossings in the Poisson Boolean model, also known as the Gilbert disc model, are noise sensitive. The approach taken generalizes a method introduced by Benjamini, Kalai and Schramm. A key ingredient in the argument is an extremal result on arbitrary hypergraphs, which is used to show that almost no information about the critical process is obtained when conditioning on a denser Poisson process.
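
For orientation (the standard statement, recalled here rather than quoted from the thesis): in first-passage percolation on $\mathbb{Z}^d$ with i.i.d. non-negative passage times, let $T(0,x)$ be the passage time from the origin to $x$ and let $B(t) = \{x : T(0,x) \le t\}$ (with lattice sites thickened to unit cubes) be the wet region at time $t$. Under a suitable moment condition, the Shape Theorem states that there is a deterministic, compact, convex set $B_0$ such that, almost surely, for every $\varepsilon > 0$,
\[
(1-\varepsilon) B_0 \subseteq \frac{B(t)}{t} \subseteq (1+\varepsilon) B_0 \quad \text{for all sufficiently large } t.
\]
Paper II above strengthens the mode of this convergence and establishes its dynamical stability.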

Item Combinatorics of solvable lattice models with a reflecting end (2021-04-21) Hietala, Linnea
In this thesis, we study some exactly solvable, quantum integrable lattice models. Izergin proved a determinant formula for the partition function of the six-vertex (6V) model on an n × n lattice with the domain wall boundary conditions (DWBC) of Korepin. The method has become a useful tool to study the partition functions of similar models. The determinant formula has also proved useful for seemingly unrelated questions. In particular, by specializing the parameters in Izergin's determinant formula, Kuperberg was able to give a formula for the number of alternating sign matrices (ASMs). Bazhanov and Mangazeev introduced special polynomials, including $p_n$ and $q_n$, that can be used to express certain ground state eigenvector components for the supersymmetric XYZ spin chain of odd length. In Paper I, we find explicit combinatorial expressions for the polynomials $q_n$ in terms of the three-color model with DWBC and a (diagonal) reflecting end. The connection emerges by specializing the parameters in the partition function of the eight-vertex solid-on-solid (8VSOS) model with DWBC and a (diagonal) reflecting end in Kuperberg's way. As a consequence, we find results for the three-color model, including the number of states with a given number of faces of each color. In Paper II, we perform a similar study of the polynomials $p_n$. To get the connection to the 8VSOS model, we specialize all parameters except one in Kuperberg's way. By using the Izergin–Korepin method in Paper III, we find a determinant formula for the partition function of the trigonometric 6V model with DWBC and a partially (triangular) reflecting end on a 2n × m lattice, m ≤ n. Thereafter we use Kuperberg's specialization of the parameters to find an explicit expression for the number of states of the model as a determinant of Wilson polynomials. We relate this to a class of ASM-like matrices.
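
For orientation (a classical result recalled here, not a result of the thesis): the ASM count referred to above is
\[
A(n) \;=\; \prod_{j=0}^{n-1} \frac{(3j+1)!}{(n+j)!},
\]
the number of $n \times n$ alternating sign matrices, so that, for example, $A(1)=1$, $A(2)=2$ and $A(3)=7$.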

Item Differential forms and currents on non-reduced complex spaces with applications to divergent integrals and the dbar-equation (2020-12-16) Lennartsson, Mattias

Item Efficient Adaptive Algorithms for an Electromagnetic Coefficient Inverse Problem (2017-06-08) Malmberg, John Bondestam
This thesis comprises five scientific papers, all of which focus on the inverse problem of reconstructing a dielectric permittivity which may vary in space inside a given domain. The data for the reconstruction consist of time-domain observations of the electric field, resulting from a single incident wave, on a part of the boundary of the domain under consideration. The medium is assumed to be isotropic, non-magnetic, and non-conductive. We model the permittivity as a continuous function, and identify distinct objects by means of iso-surfaces at threshold values of the permittivity. Our reconstruction method is centred around the minimization of a Tikhonov functional, well known from the theory of ill-posed problems, where the minimization is performed in a Lagrangian framework inspired by optimal control theory for partial differential equations. Initial approximations for the regularization and minimization are obtained either by a so-called approximately globally convergent method, or by a (simpler but less rigorous) homogeneous background guess. The functions involved in the minimization are approximated with finite elements, or with a domain decomposition method with finite elements and finite differences. The computational meshes are refined adaptively with regard to the accuracy of the reconstructed permittivity, by means of an a posteriori error estimate derived in detail in the fourth paper. The method is tested with success on simulated as well as laboratory-measured data.

Item Efficient training of interpretable, non-linear regression models (2023-06-30) Allerbo, Oskar
Regression, the process of estimating functions from data, comes in many flavors. One of the most commonly used regression models is linear regression, which is computationally efficient and easy to interpret, but lacks flexibility. Non-linear regression methods, such as kernel regression and artificial neural networks, tend to be much more flexible, but also harder to interpret and more difficult and computationally demanding to train. In the five papers of this thesis, different techniques for constructing regression models that combine flexibility with interpretability and computational efficiency are investigated. In Papers I and II, sparsely regularized neural networks are used to obtain flexible, yet interpretable, models for additive modeling (Paper I) and dimensionality reduction (Paper II). Sparse regression, in the form of the elastic net, is also covered in Paper III, where the focus is on increased computational efficiency by replacing explicit regularization with iterative optimization and early stopping. In Paper IV, inspired by Jacobian regularization, we propose a computationally efficient method for bandwidth selection for kernel regression with the Gaussian kernel. Kernel regression is also the topic of Paper V, where we revisit efficient regularization through early stopping, by solving kernel regression iteratively. Using an iterative algorithm for kernel regression also enables changing the kernel during training, which we use to obtain a more flexible method, resembling the behavior of neural networks. In all five papers, the results are obtained by carefully selecting either the regularization strength or the bandwidth. Thus, in summary, this work contributes new statistical methods for combining flexibility with interpretability and computational efficiency based on intelligent hyperparameter selection.
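
For orientation (a standard formulation of the setting, with notation chosen here rather than taken from the papers): kernel ridge regression with the Gaussian kernel fits
\[
\hat f(x) = \sum_{i=1}^{n} \alpha_i\, k(x, x_i), \qquad \alpha = (K + \lambda I)^{-1} y, \qquad k(x, x') = \exp\!\Big(-\frac{\lVert x - x' \rVert^2}{2\sigma^2}\Big),
\]
where $K_{ij} = k(x_i, x_j)$. The regularization strength $\lambda$ and the bandwidth $\sigma$ are the kinds of hyperparameters whose selection, or replacement by early stopping of an iterative solver, the abstract refers to.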

Item Enabling mechanistic understanding of cellular dynamics through mathematical modelling and development of efficient methods (2024-11-13) Persson, Sebastian
Cell biology is complex, but unravelling this complexity is important. For example, the recent COVID-19 pandemic highlighted the need to understand how cells function in order to develop efficient vaccines and treatments. However, studying cellular systems is challenging because they are often highly interconnected, dynamic and contain many redundant components. Mathematical modelling provides a powerful framework to reason about such complexity. In the four papers underlying this thesis, our aim was twofold. The first was to unravel mechanisms that regulate cellular dynamic behaviour in the model organism Saccharomyces cerevisiae. In particular, by developing single-cell dynamic models, we investigated how cells respond to changes in nutrient levels. We identified mechanisms behind the reaction dynamics and uncovered sources of cell-to-cell variability. Additionally, by developing reaction-diffusion modelling, we studied the size regulation of self-assembled structures and demonstrated how the interplay of feedback mechanisms can regulate structure size. Our second aim was to develop methods and software to facilitate efficient modelling. Modelling often involves fitting models to data to verify specific hypotheses, and it is beneficial if models inconsistent with the data can be discarded rapidly. To this end, we developed software for working with single-cell dynamic models that, in contrast to previous methods, imposes fewer restrictions on how cell-to-cell variability is modelled. Moreover, we developed and evaluated software for fitting population-average dynamic models to data. This software outperforms the current state of the art, and to make it accessible, we released it as two well-documented open-source packages. Taken together, this thesis sheds light on fundamental regulatory mechanisms and introduces software for efficient modelling.
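
For orientation (a generic formulation of model fitting, not the specific models of the papers): a population-average dynamic model is typically an ODE system $\mathrm{d}x/\mathrm{d}t = f(x(t), \theta)$ with observations $y_i \approx h(x(t_i; \theta))$, and fitting the model to data amounts to solving
\[
\hat\theta = \arg\min_{\theta} \sum_{i} \big(y_i - h(x(t_i; \theta))\big)^2,
\]
or maximising a corresponding likelihood; single-cell models additionally let some components of $\theta$ vary randomly between cells to capture cell-to-cell variability.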

Item Exercising Mathematical Competence: Practising Representation Theory and Representing Mathematical Practice (2013-04-05) Säfström, Anna Ida
This thesis assembles two papers in mathematics and two papers in mathematics education. In the mathematics part, representation theory is practised. Two Clebsch-Gordan type problems are addressed. The original problem concerns the decomposition of the tensor product of two finite-dimensional, irreducible highest weight representations of $GL_{\mathbb{C}}(n)$. This problem is known to be equivalent to the characterisation of the eigenvalues of the sum of two Hermitian matrices. In this thesis, the method of moment maps and coadjoint orbits is used to establish an equivalence between the eigenvalue problem for skew-symmetric matrices and the tensor product decomposition in the case of $SO_{\mathbb{C}}(2k)$. In addition, some irreducible, infinite-dimensional, unitary highest weight representations of $\mathfrak{gl}_{\mathbb{C}}(n+1)$ are determined. In the mathematics education part, a framework is developed, offering a language and graphical tool for representing the exercising of competence in mathematical practices. The development sets out from another framework, where competence is defined in terms of mastery. Adjustments are made in order to increase the coherence of the framework, to relate the constructs to contemporary research and to enable analysis of the exercising of competence. These modifications result in two orthogonal sets of essential aspects of mathematical competence: five competencies and two aspects. The five competencies reflect different constituents of mathematical practice: representations, procedures, connections, reasoning and communication. The two aspects evince two different modes of the competencies: the productive and the analytic. The operationalisation of the framework gives rise to an analysis guide and a competency graph. The framework is applied to two sets of empirical data. In the first study, young children's exercising of competencies in handling whole numbers is analysed. The results show that the analytical tools are able to explain this mathematical practice from several angles: in relation to a specific concept, in a certain activity, and in terms of how different representations may pervade procedures and interaction. The second study describes university students' exercising of competencies in a proving activity. The findings reveal that, while reasoning and the analytic aspect are significant in proving, the other competencies and the productive aspect play important roles as well. Combined, the two studies show that the framework has explanatory power for various mathematical practices. In light of this framework, this thesis exercises both aspects of mathematical competence: the productive aspect in representation theory and the analytic aspect in the development of the framework.

Item Extreme rainfall modelling under climate change and proper scoring rules for extremes and inference (2024-09-06) Ólafsdóttir, Helga Kristín
Model development, model inference and model evaluation are three important cornerstones of statistical analysis. This thesis touches on all three, through modelling extremes under climate change, evaluating models for extremes using scoring rules, and using scoring rules for statistical inference in spatial models. The findings are presented in three papers. In Paper I, a new statistical model is developed that uses the connection between the generalised extreme value distribution and the generalised Pareto distribution to capture frequency changes in annual maxima. This allows using high-quality annual maxima data, instead of less well-checked daily data, to separately estimate trends in frequency and intensity. The model was applied to annual maximum data from Volume 10 of NOAA Atlas 14, showing that in the Northeastern US there is evidence that extreme rainfall events are occurring more often with rising temperature, but little evidence of trends in the distribution of the sizes of individual extreme rainfall events. Paper II introduces the concept of local weight-scale invariance, which is a relaxation of local scale invariance for proper scoring rules. This relaxation is suitable for weighted scores, which are useful, for example, when comparing models for extremes. A weight-scale invariant version of the tail-weighted continuous ranked probability score is introduced, and the properties of the different weighted scores are investigated. Finally, Paper III continues on the path of scoring rules, but instead uses scoring rules for statistical inference in spatial models. The proposed approach estimates parameters of spatial models by maximising the average leave-one-out cross-validation score (LOOS). The method results in fast computations for Gaussian models with sparse precision matrices and allows tailoring the estimator's robustness to outliers, and its sensitivity to spatial variations in uncertainty, through the choice of the scoring rule used in the maximisation.
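
For orientation (standard extreme value theory, recalled here with generic notation): the generalised extreme value (GEV) distribution for block maxima and the generalised Pareto distribution (GPD) for excesses over a high threshold $u$ are
\[
G(x) = \exp\!\Big\{-\big[1 + \xi \tfrac{x-\mu}{\sigma}\big]_{+}^{-1/\xi}\Big\},
\qquad
H(y) = 1 - \big[1 + \xi \tfrac{y}{\tilde\sigma}\big]_{+}^{-1/\xi},
\]
and they share the same shape parameter $\xi$: if exceedances of $u$ occur according to a Poisson process and the excess sizes are GPD, then the annual maximum is GEV distributed. A connection of this kind is what allows trends in the frequency of extreme events to be estimated from annual maxima, as in Paper I above.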

Item Extreme Value Analysis of Huge Datasets: Tail Estimation Methods in High-Throughput Screening and Bioinformatics (2011-10-13) Zholud, Dmitrii
This thesis presents results in Extreme Value Theory with applications to High-Throughput Screening and Bioinformatics. The methods described here, however, are applicable to statistical analysis of huge datasets in general. The main results are covered in four papers. The first paper develops novel methods to handle false rejections in High-Throughput Screening experiments where testing is done at extreme significance levels, with low degrees of freedom, and when the true null distribution may differ from the theoretical one. We introduce efficient and accurate estimators of the False Discovery Rate and related quantities, and provide methods for estimating the true null distribution resulting from data preprocessing, as well as techniques to compare it with the theoretical null distribution. Extreme Value Statistics provides a natural analysis tool: a simple polynomial model for the tail of the distribution of p-values. We exhibit the properties of the estimators of the parameters of the model, and point to model checking tools, both for independent and dependent data. The methods are tried out on two large-scale genomic studies and on an fMRI brain scan experiment. The second paper gives a rigorous mathematical basis for the above methods. We present asymptotic formulas for the distribution tails of probably the most commonly used statistical tests under non-normality, dependence, and non-homogeneity, and derive bounds on the absolute and relative errors of the approximations. In papers three and four we study high-level excursions of the Shepp statistic for the Wiener process and for a Gaussian random walk. The application areas include finance and insurance, and sequence alignment scoring and database searches in Bioinformatics.
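
One way to read the polynomial tail model mentioned above (an illustration with assumed notation, not necessarily the exact parametrisation used in the papers): near zero the p-value distribution is approximated by
\[
P(p \le x) \approx a\,x^{b}, \qquad x \to 0,
\]
where $a = b = 1$ recovers the uniform distribution of p-values under the theoretical null, and departures of $a$ and $b$ from 1 quantify how the true null distribution deviates from it at extreme significance levels.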

Item Geometrical and percolative properties of spatially correlated models (2020-03-10) Hallqvist Elias, Karl Olof
This thesis consists of four papers dealing with phase transitions in various models of continuum percolation. These models exhibit complicated dependencies and are generated by different Poisson processes. For each such process there is a parameter, known as the intensity, governing its behavior. By varying the value of this parameter, the geometrical and topological properties of these models may undergo dramatic and rapid changes. This phenomenon is called a phase transition, and the value at which the change occurs is called a critical value. In Paper I, we study the topic of visibility in the vacant set of the Brownian interlacements in Euclidean space and the Brownian excursions process in the unit disc. For the vacant set of the Brownian interlacements we obtain upper and lower bounds on the probability of having visibility to distance r in some direction, in terms of the probability of having visibility to distance r in a fixed direction. For the vacant set of the Brownian excursions we prove a phase transition in terms of visibility to infinity (with respect to the hyperbolic metric). We also determine the critical value and show that at the critical value there is no visibility to infinity. In Paper II we compute the critical value for percolation in the vacant set of the Brownian excursions process. We also show that the Brownian excursions process is a hyperbolic analogue of the Brownian interlacements. In Paper III, we study the vacant set of a semi scale invariant version of the Poisson cylinder model. In this model it turns out that the vacant set is a fractal. We determine the critical value for the so-called existence phase transition and what happens at the critical value. We also compute the Hausdorff dimension of the fractal whenever it exists. Furthermore, we prove that the fractal exhibits a nontrivial connectivity phase transition for dimensions four and greater and that the fractal is totally disconnected for dimension two. In the three-dimensional case we prove a partial result showing that the fractal restricted to a plane is totally disconnected with probability one. In Paper IV we study a continuum percolation process, the random ellipsoid model, generated by taking the intersection of a Poisson cylinder model in d dimensions and a subspace of dimension k. For k between 2 and d-2, we show that there is a non-trivial phase transition concerning the expected number of ellipsoids in the cluster of the origin. When k = d-1 this critical value is zero.
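
For orientation (the generic form of such a critical value, stated here with assumed notation): if $P_\lambda$ denotes the law of the model at intensity $\lambda$ and $A$ is the event of interest (for instance, that the origin lies in an unbounded connected component, or that there is visibility to infinity), then
\[
\lambda_c = \inf\{\lambda > 0 : P_\lambda(A) > 0\}
\]
(or the corresponding supremum when $A$ becomes less likely as $\lambda$ grows, as for vacant sets), and the phase transition is nontrivial when $\lambda_c$ is strictly between 0 and infinity.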

Item Global residue currents and the Ext functors (2022-09-09) Johansson, Jimmy
This thesis concerns developments in multivariable residue theory. In particular, we consider global constructions of residue currents related to work by Andersson and Wulcan. In the first paper of this thesis, we consider global residue currents defined on projective space, and we show that these currents provide a tool for studying polynomial interpolation. Polynomial interpolation is related to local cohomology, and by a result known as local duality, there is a close connection with certain Ext groups. The second paper of this thesis is devoted to further study of connections between residue currents and the Ext functors. The main result is the construction of a global residue current on a complex manifold, which we use to give an explicit formula for an isomorphism between two different representations of the global Ext groups on complex manifolds.

Item Hodge Theory in Combinatorics and Mirror Symmetry (2024-10-22) Pochekai, Mykola
Hodge theory, in its broadest sense, encompasses the study of the decomposition of cohomology groups of complex manifolds, as well as related fields such as periods, motives, and algebraic cycles. In this thesis, ideas from Hodge theory have been incorporated into two seemingly unrelated projects, namely mathematical mirror symmetry and combinatorics. Papers I-II explore an instance of genus one mirror symmetry for the complete intersection of two cubics in five-dimensional projective space. The mirror family for this complete intersection is constructed, and it is demonstrated that the BCOV-invariant of the mirror family is related to the genus one Gromov-Witten invariants of the complete intersection of two cubics. This proves new cases of genus one mirror symmetry. Paper III defines Hodge-theoretic structures on triangulations of a special type. It is shown that if a polytope admits a regular, unimodular triangulation with a particular additional property, its $\delta$-vector from Ehrhart theory is unimodal.

Item Hyperuniformity and Hyperfluctuations for Random Measures on Euclidean and Non-Euclidean spaces (2025-05-05) Byléhn, Mattias
In this thesis we study fluctuations of generic random point configurations in Euclidean and symmetric curved geometries. Mathematically, such configurations are interpreted as isometrically invariant point processes, and fluctuations are recorded by the variance of the number of points in a centered ball, the number variance. Hyperuniformity and hyperfluctuation of such configurations, in the sense of Stillinger-Torquato, are characterized in terms of large-scale asymptotics of the number variance in relation to that of an ideal gas, and equivalently by small-scale asymptotics of the Bartlett spectral measure in the diffraction picture. Appended to the thesis are three papers. In Paper I we provide lower asymptotic bounds for number variances of isometrically invariant random measures in Euclidean and hyperbolic spaces, generalizing a result by Beck. In particular, we find that geometric hyperuniformity fails for every isometrically invariant random measure on hyperbolic space. In contrast to this, we define a notion of spectral hyperuniformity which is satisfied by certain invariant random lattice configurations. In Paper II we establish similar lower asymptotic bounds for number variances of automorphism invariant point processes in regular trees. The main result is that these lower bounds are not uniform for the invariant random lattice configurations defined by the fundamental groups of complete regular graphs and the Petersen graph. We also provide a criterion for when these lower bounds are uniform in terms of certain rational peaks appearing in the diffraction picture. In Paper III we prove the existence and uniqueness of Bartlett spectral measures for invariant random measures on a large class of non-compact commutative spaces, which includes those in Papers I and II. For higher rank symmetric spaces governed by simple Lie groups, we prove that the number variance of any invariant random measure is asymptotically bounded from above by a power strictly less than 2 of the volume of balls. Moreover, we derive Bartlett spectral measures for invariant determinantal point processes on commutative spaces and define a notion of heat kernel hyperuniformity on Euclidean and hyperbolic spaces that is equivalent to spectral hyperuniformity.
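
For orientation (the standard definitions, stated here with generic notation): for an invariant random measure or point process $\mu$, the number variance of a ball $B_R$ of radius $R$ is
\[
\mathrm{NV}(R) = \operatorname{Var}\big(\mu(B_R)\big),
\]
and the Poisson ideal gas satisfies $\mathrm{NV}(R) \propto \operatorname{vol}(B_R)$. The configuration is called hyperuniform if $\mathrm{NV}(R)/\operatorname{vol}(B_R) \to 0$ and hyperfluctuating if the ratio tends to infinity as $R \to \infty$; in the diffraction picture this corresponds to the behaviour of the Bartlett spectral measure near the origin.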

Item Index theory in geometry and physics (2011-04-06) Goffeng, Magnus
This thesis contains three papers in the area of index theory and its applications in geometry and mathematical physics. These papers deal with the problems of calculating the charge deficiency on the Landau levels and of finding explicit analytic formulas for mapping degrees of Hölder continuous mappings. The first paper deals with charge deficiencies on the Landau levels for non-interacting particles in R^2 under a constant magnetic field, or equivalently, one particle moving in a constant magnetic field in even-dimensional Euclidean space. The K-homology class that the charge of a Landau level defines is calculated in two steps. The first step is to show that the charge deficiencies are the same on every particular Landau level. The second step is to show that the lowest Landau level, which is equivalent to the Fock space, defines the same class as the K-homology class on the sphere defined by the Toeplitz operators in the Bergman space of the unit ball. The second and third papers use regularization of index formulas in cyclic cohomology to produce analytic formulas for the degree of Hölder continuous mappings. In the second paper, Toeplitz operators and Henkin-Ramirez kernels are used to find analytic formulas for the degree of a function from the boundary of a relatively compact, strictly pseudo-convex domain in a Stein manifold to a compact connected oriented manifold. In the third paper, analytic formulas for Hölder continuous mappings between general even-dimensional manifolds are produced using a pseudo-differential operator associated with the signature operator.

Item The influence of numbers when students solve equations (2023) Holmlund, Anna
Is it possible that some students' primary difficulty with equation-solving is neither handling the literal symbols nor the equality, but the numbers used as coefficients? It is well known that many students find algebra a difficult topic, and there is much research on how students experience this strand of mathematics, with indications of how it can be taught. Still, a perspective not often foregrounded in this research, though suggested as potentially important, is how students perceive numbers other than natural numbers in algebra. Such kinds of numbers (negative numbers and decimal fractions) have been used in this thesis to explore how the numbers influence students' equation-solving. Two studies with a phenomenographic approach have explored how students (n1=5, n2=23) perceive linear equations of similar structure but with different kinds of numbers as coefficients, e.g., 819 = 39 ∙ 𝑥 and 0.12 = 0.4 ∙ 𝑥. In the second study, a test was also used to investigate the magnitude of the influence of a change of coefficients for 110 students while solving equations with a calculator. The findings show that equations with decimal fractions and negative numbers are less likely to be solved by these students, and decimal fractions as coefficients can even make a student unable to recognize a kind of equation they just solved with natural numbers. The interviews show that, depending on the numbers in a linear equation, some students focus on different aspects of the equation, and that the numbers influence what meaning the students see in the equation and how they can justify their solution. Following the phenomenographic approach, differences in the way that students experience the equations were specified, and critical aspects were formulated. This suggests a wider use of different kinds of numbers in the teaching of algebra, as different kinds of numbers pose different challenges, and thereby different learning potential, for students.
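
For concreteness (the arithmetic itself is not part of the abstract): the two example equations share the structure $a = b \cdot x$ with solution $x = a/b$, so
\[
819 = 39 \cdot x \;\Rightarrow\; x = \frac{819}{39} = 21,
\qquad
0.12 = 0.4 \cdot x \;\Rightarrow\; x = \frac{0.12}{0.4} = 0.3,
\]
which is what makes the equations structurally identical even though the kinds of numbers involved differ.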

Item Learning to solve problems that you have not learned to solve: Strategies in mathematical problem solving (2019-08-16) Fülöp, Éva
This thesis aims to contribute to a deeper understanding of the relationship between problem-solving strategies and success in mathematical problem solving. In its introductory part, it traces and describes the term strategy in mathematics and discusses its relationship to the concepts of method and algorithm. Through these concepts, we identify three decision-making levels in the problem-solving process. The first two parts of this thesis are two different studies analysing how students' problem-solving ability is affected by the learning of problem-solving strategies in mathematics. We investigated the effects of variation theory-based instructional design in teaching problem-solving strategies within a regular classroom. This was done by analysing a pre- and a post-test to compare the development of an experimental group's and a control group's knowledge of mathematics in general and problem-solving ability in particular. The analysis of the test results shows that these designed activities improve students' problem-solving ability without compromising their progress in mathematics in general. The third study in this thesis aims to give a better understanding of the role and use of strategies in mathematical problem-solving processes. By analysing 79 upper secondary school students' written solutions, we were able to identify decisions made at all three levels and how knowledge at these levels affected students' problem-solving success. The results show that students who could view the problem as a whole while keeping the sub-problems in mind simultaneously had the best chances of succeeding. In summary, we have shown in the appended papers that the teaching of problem-solving strategies can be integrated into mathematics teaching practice to improve students' mathematical problem-solving abilities.

Item Limit Theorems for Lattices and L-functions Holm, Kristian
This PhD thesis investigates distributional questions related to three types of objects: unimodular lattices, symplectic lattices, and Hecke L-functions of imaginary quadratic number fields of class number 1. In Paper I, we follow Södergren and examine the asymptotic joint distribution of a collection of random variables arising as geometric attributes of the N = N(n) shortest non-zero lattice vectors (up to sign) in a random unimodular lattice in n-dimensional Euclidean space, as the dimension n tends to infinity: normalizations of the lengths of these vectors, and normalizations of the angles between them. We prove that, under suitable conditions on N, this collection of random variables is asymptotically distributed like the first N arrival times of a Poisson process of intensity 1/2 together with a collection of positive standard Gaussians. This generalizes previous work of Södergren. In Paper II, we use methods developed by Björklund and Gorodnik to study the error term in a classical lattice point counting asymptotic due to Schmidt, in the context of symplectic lattices and a concrete increasing family of sets in 2n-dimensional Euclidean space. In particular, we show that this error term satisfies a central limit theorem as the volumes of the sets tend to infinity. Moreover, we obtain new L^p bounds on a height function on the space of symplectic lattices originally introduced by Schmidt. In Paper III, we follow Waxman and study a family of L-functions associated to angular Hecke characters on imaginary quadratic number fields of class number 1. We obtain asymptotic expressions for the 1-level density of the low-lying zeros in the family, both unconditionally and conditionally (under the assumption of the Grand Riemann Hypothesis and the Ratios Conjecture). Our results verify the Katz–Sarnak Density Conjecture in a special case for our family of L-functions.

Item Matematisk och pedagogisk kunskap – Lärarstudenters uppfattningar av begreppen funktion och variabel [Mathematical and pedagogical knowledge – student teachers' conceptions of the concepts of function and variable] (2017) Borke, Mikael; Matematiska Vetenskaper, Chalmers Tekniska Högskola och Göteborgs Universitet