Masteruppsatser
Permanent URI for this collection: https://gupea-staging.ub.gu.se/handle/2077/28887
Browsing Masteruppsatser by Title
Now showing 1 - 20 of 81
Item: A Wave Propagation Solver for Computational Aero-Acoustics (2012-03-14)
Solberg, Elin; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

Simulation software is increasingly replacing traditional physical testing in the process of product development, as it can in many cases reduce development times and costs. In a variety of applications, the reduction of noise is an important aspect of the product design, and using methods from the field of computational aero-acoustics (CAA), the generation and propagation of sound in air may be simulated. In this project, a FEM-based solver for the three-dimensional Helmholtz equation, modeling the propagation of sound waves, has been developed and tested. The implementation includes Galerkin/least-squares stabilization. Both interior and exterior problems are handled, the latter by a coupled finite-infinite element method. Further, using a hybrid CAA methodology, the solver may be coupled to a CFD solver to simulate the sound arising from transient fluid flows. The solver has been tested, and observed to perform well, on a set of interior and exterior problems. Results are presented for three cases of increasing complexity: first an interior, homogeneous problem with a known analytical solution; second an exterior problem with point sources; and third an exterior problem with acoustic sources from a CFD computation, i.e. a full hybrid CAA simulation. In the latter two cases, the frequencies at which standing waves appear in a pipe and a deep cavity, respectively, are compared to theoretically computed values and are seen to be well captured by the simulations. Moreover, the results of the full CAA simulation are compared to experimental data, to which they show good resemblance.
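As an illustration of the theoretically computed standing-wave frequencies mentioned above, the classical open-closed pipe (quarter-wave resonator) model can be sketched as follows; the pipe length and sound speed are illustrative values, not figures from the thesis:

```python
# Resonance frequencies of a pipe closed at one end, the textbook model
# against which simulated standing-wave frequencies can be checked.

def pipe_resonances(length_m, n_modes, speed_of_sound=343.0):
    """Return the first n_modes resonance frequencies (Hz) of an
    open-closed pipe: f_n = (2n - 1) * c / (4 * L)."""
    return [(2 * n - 1) * speed_of_sound / (4.0 * length_m)
            for n in range(1, n_modes + 1)]

print(pipe_resonances(1.0, 3))  # first three modes of a 1 m pipe
```

The odd-harmonic spacing (85.75, 257.25, 428.75 Hz for a 1 m pipe at c = 343 m/s) is the pattern one would expect the simulated pipe spectrum to reproduce.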
The mathematical model, numerical methods and implementation are presented in the report along with numerical results.

Item: Always Look on the Positive-Definite Side of Life (2020-11-24)
Byléhn, Mattias; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

This thesis concerns distributions on Rⁿ with the property of being positive-definite relative to a finite subgroup of the orthogonal group O(n). We construct examples of such distributions as the inverse Abel transform of Dirac combs on the geometries of Euclidean space Rⁿ and the real and complex hyperbolic planes H² and H²_C. In the case of R³ we obtain Guinand's distribution as the inverse Abel transform of the Dirac comb on the standard lattice Z³ ⊂ R³. The main theorem of the paper is due to Bopp, Gelfand-Vilenkin and Krein, stating that a distribution on Rⁿ is positive-definite relative to a finite subgroup W ⊂ O(n) if and only if it is the Fourier transform of a positive W-invariant Radon measure on {z ∈ Cⁿ : z̄ ∈ W·z} ⊂ Cⁿ. We present Bopp's proof of this theorem using a version of the Plancherel-Godement theorem for complex commutative ∗-algebras.

Item: An argument principle for generalised point residues (2021-08-24)
Nkunzimana, Rahim; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

We define a point residue for any Artinian O-module via Hermitian free resolutions, generalising the one-dimensional residue and the classical multivariate Grothendieck point residue. We consider various definitions of multiplicity for such modules and prove a residue formula connecting the algebraic multiplicity to our residues. Our result can be seen as a generalisation both of the argument principle and of a corresponding result for Grothendieck residues.
It is also a special case of a recent result for Andersson-Wulcan currents proven by Lärkäng and Wulcan.

Item: An exploratory machine learning workflow for the analysis of adverse events from clinical trials (2020-06-26)
Carlerös, Margareta; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

A new pharmaceutical drug needs to be shown to be safe and effective before it can be used to treat patients. Adverse events (AEs) are potential side-effects that are recorded during clinical trials, in which a new drug is tested in humans, and may or may not be related to the drug under study. The large diversity of AEs and the often low incidence of each AE reported during clinical trials make traditional statistical testing challenging, due to problems with multiple testing and insufficient power. Therefore, analysis of AEs from clinical trials currently relies mainly on manual review of descriptive statistics. The aim of this thesis was to develop an exploratory machine learning approach for the objective analysis of AEs in two steps, where possibly drug-related AEs are identified in the first step and patient subgroups potentially having an increased risk of experiencing a particular drug side-effect are identified in the second step. Using clinical trial data from a drug with a well-characterized safety profile, the machine learning methodology demonstrated high sensitivity in identifying drug-related AEs and correctly classified several AEs as being linked to the underlying disease. Furthermore, in the second step of the analysis, the model suggested factors that could be associated with an increased risk of experiencing a particular side-effect; however, a number of these factors appeared to be general risk factors for developing the AE independent of treatment. As the method only identifies associations, the results should be considered hypothesis-generating.
The exploratory machine learning workflow developed in this thesis could serve as a complementary tool to help guide subsequent manual analysis of AEs, but requires further validation before being put into practice.

Item: Artificial Intelligence for Option Pricing (2022-06-19)
Hietanen, Emil; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

This thesis addresses the issue of vulnerable underlying assumptions used in option pricing methodology. More precisely, underlying assumptions made about the financial assets and markets make option pricing theory vulnerable to changes in the financial framework. To enhance the robustness of option pricing, an alternative approach using artificial intelligence is introduced. Artificial intelligence is an advantageous tool for pricing financial assets and instruments, in particular through deep neural networks, as one does not have to make any model assumptions. Instead, the neural network learns the underlying patterns of the asset and market directly from the input data. To test the proposed pricing alternative, an error-metric analysis, a log-returns distribution fit, and a volatility-smile fit are performed. Four mathematical option pricing models are used as reference models: Black–Scholes, the Merton jump-diffusion model, the Heston stochastic volatility model, and the Bates stochastic volatility model with jumps. In addition, three types of neural networks are used: the multilayer perceptron (MLP), long short-term memory (LSTM), and the convolutional neural network (CNN). All methods included in the thesis require some predefined set of parameters; therefore, a parameter calibration method is required. A non-linear least-squares method can be used in cases where the number of parameter combinations is sufficiently small. However, as the possible number of parameter combinations increases, the method becomes too computationally heavy.
To combat this, an evolutionary reinforcement machine learning algorithm is introduced to find a set of calibrated parameters in a more efficient way. First versions of option pricing neural networks show great promise, with significantly better results than the reference models. In addition, the networks show good coherence with existing stylized facts of options, in terms of the empirical frequency distribution of log-returns and the volatility-smile fit.

Item: Audio Anomaly Detection in Cars (2023-09-11)
Hussein, Asma; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

Audio anomaly detection in the context of car driving is a crucial task for ensuring vehicle safety and identifying potential faults. This paper aims to investigate and compare different methods for unsupervised audio anomaly detection using a data set consisting of recorded audio data from fault injections and normal "no fault" driving. The feature space used in the final modelling consisted of CENS (chroma energy normalized statistics), LMFE (log Mel frequency energy), and MFCC (Mel-frequency cepstral coefficients) features. These features exhibit promising capabilities in distinguishing between normal and abnormal classes. Notably, the CENS features revealed specific pitch classes that contribute to the distinguishing characteristics of abnormal sounds. Four machine learning methods were tested to evaluate the performance of different models for audio anomaly detection: Isolation Forest, One-Class Support Vector Machines, Local Outlier Factor, and the Long Short-Term Memory Autoencoder. These models were applied to the extracted feature space, and their respective performance was assessed using metrics such as ROC curves, AUC scores, PR curves, and AP scores.
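The average-precision (AP) score named above can be sketched in a few lines; this is the generic textbook computation, not code from the thesis, and the variable names are illustrative:

```python
# AP summarizes the precision-recall curve as a sum of precision values
# weighted by the increase in recall at each true positive.

def average_precision(scores, labels):
    """scores: anomaly scores (higher = more anomalous); labels: 1 = anomaly."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos = sum(labels)
    tp, ap, prev_recall = 0, 0.0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            tp += 1
            recall = tp / n_pos
            ap += (recall - prev_recall) * (tp / rank)  # precision at this rank
            prev_recall = recall
    return ap
```

A perfect ranking (all anomalies scored above all normal clips) yields AP = 1.0; mixing in false positives before the last anomaly pulls it down, which matches the precision drop described for LSTM-AE below a certain threshold.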
The final results demonstrate that all four models perform well in detecting audio anomalies in cars: LOF and LSTM-AE achieve the highest AUC scores of 0.98, while OCSVM and IF exhibit AUC scores of 0.97. However, LSTM-AE displays a lower average precision score due to a significant drop in precision beyond a certain reconstruction error threshold, particularly for the normal class. This study demonstrates the effectiveness of Mel frequency and chroma features in modelling for audio anomaly detection in cars, and shows great potential for further research and development of effective anomaly detection systems in automotive applications.

Item: Bisc, a biclustering extension to scregclust (2025-04-14)
Franzén, Sebastian; Birve, Filip; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

Item: Cluster KL-UCB: Optimism for the Best, Pessimism for the Rest (2022-06-28)
Lööf, Emelie; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

The project presents an allocation strategy for the stochastic multi-armed bandit when considering instances with a clustered structure. Using the architecture of the KL-UCB policy as a source of inspiration, an algorithm which exploits and takes advantage of a clustered structure is derived. Firstly, encouraged by previous work related to the subject, a multi-level structure approach will serve as an initial examination. Secondly, the Cluster KL-UCB policy will be derived and evaluated considering three different approaches. It will be shown, both theoretically and empirically, that adapting to a clustered environment improves the performance compared to its non-cluster-adapting ancestor. Both upper and lower bounds on the regret will be provided in order to theoretically ensure the performance of the algorithm.
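The KL-UCB index that inspires the policy can be sketched as follows for Bernoulli rewards; the function names, the exploration budget, and the use of bisection are standard textbook choices, not the thesis's implementation:

```python
import math

def bernoulli_kl(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def klucb_index(mean, pulls, t):
    """Largest q >= mean with pulls * KL(mean, q) <= log(t),
    found by bisection; unpulled arms get the optimistic index 1."""
    if pulls == 0:
        return 1.0
    budget = math.log(t)
    lo, hi = mean, 1.0
    for _ in range(60):  # bisection on the increasing map q -> KL(mean, q)
        mid = (lo + hi) / 2
        if pulls * bernoulli_kl(mean, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo
```

At each round the plain KL-UCB policy pulls the arm with the largest index; the index shrinks toward the empirical mean as an arm accumulates pulls, which is the optimism-under-uncertainty mechanism the cluster variant builds on.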
Lastly, a number of empirical experiments will be performed in order to further ensure the performance and validate the theoretical results.

Item: Convolutional neural networks for semantic segmentation of FIB-SEM volumetric image data (2020-11-26)
Skärberg, Fredrik; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

Focused ion beam scanning electron microscopy (FIB-SEM) is a well-established microscopy technique for 3D imaging of porous materials. We investigate three porous samples of microporous films made from ethyl cellulose and hydroxypropyl cellulose (EC/HPC) polymer blends. These types of polymer blends are used as coating materials on various pharmaceutical tablets or pellets and form a continuous network of pores in the film. Understanding the microstructures of these porous networks allows for controlling drug release. We perform semantic segmentation of the image data, separating the solid parts of the material from the pores, to accurately quantify the microstructures in terms of porosity. Segmentation of FIB-SEM data is complicated because each 2D slice contains 2.5D information, due to parts of deeper underlying cross-sections shining through in porous areas. This shine-through effect greatly complicates the segmentation with regard to two factors: uncertainty in the positioning of the microstructural features, and overlapping grayscale intensities between pore and solid regions. In this work, we explore different convolutional neural networks (CNNs) for pixelwise classification of FIB-SEM data, where the class of each pixel is predicted using a three-dimensional neighborhood of size (nx, ny, nz). In total, we investigate six types of CNN architectures with different hyperparameters, dimensionalities, and inputs. For assessing the classification performance we consider the mean intersection over union (mIoU), also called the Jaccard index.
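The mIoU (Jaccard index) metric mentioned above can be sketched as follows; this is the generic definition applied to flat label lists, not the thesis's code:

```python
# Mean intersection over union: per-class IoU = |pred ∩ target| / |pred ∪ target|,
# averaged over the classes that occur in either prediction or ground truth.

def mean_iou(pred, target, n_classes):
    """pred, target: flat sequences of per-pixel class labels."""
    ious = []
    for c in range(n_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious)
```

For a binary pore/solid segmentation, n_classes is 2 and mIoU penalizes both missed pores and falsely detected pores symmetrically, which is why it is a common choice for porosity-oriented evaluation.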
All the investigated CNNs are well suited to the problem and produce good segmentations of the FIB-SEM data. The so-called standard 2D CNN performs best overall, followed by different varieties of 2D and 3D CNN architectures. The best performing models utilize larger neighborhoods, and there is a clear trend that larger neighborhoods boost performance. Our proposed method improves results on all metrics by 1.35-3.14% compared to a previously developed method for the same data using Gaussian scale-space features and a random forest classifier. The porosities for the three HPC samples are estimated to be 20.34, 33.51, and 45.75%, in close agreement with the expected porosities of 22, 30, and 45%. Interesting future work would be to let multiple experts segment the same image to obtain more accurate ground truths, to investigate loss functions that better correlate with the porosity, and to consider other neighborhood sizes. Ensemble learning methods could potentially boost results even further, by utilizing multiple CNNs and/or other machine learning models together.

Item: Credit Card Fraud Detection by Nearest Neighbor Algorithms (2023-04-13)
Maghsood, Ramin; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

As the usage of internet banking and online purchases has increased dramatically in today's world, the risk of fraudulent activities and the number of fraud cases are increasing day by day. The most frequent type of bank fraud in recent years is credit card fraud, which leads to huge financial losses on a global level. Credit card fraud happens when an unauthorized person uses another person's credit card information to make purchases. Credit card fraud is an important and increasing problem for banks and individuals all around the world.
This thesis applies supervised and unsupervised nearest neighbor algorithms for fraud detection on a Kaggle data set consisting of 284,807 credit card transactions, out of which 492 are frauds, with 30 covariates per transaction. The supervised methods are shown to be quite efficient, but require that the user has access to labelled training data in which one knows which transactions are frauds. Unsupervised detection is harder: for example, to find 80% of the frauds, the algorithm classifies more than 50 times as many valid transactions as fraud cases. The unsupervised nearest neighbor distance method is compared to methods using the distance to the center of the data for fraud detection, and to detection algorithms which combine the two methods. The L2 distance, the L2 distance to zero, and the combination of both distances are analyzed for the unsupervised method. The performance of the methods is evaluated by Precision-Recall (PR) curves. The results show that, based on both the area under the curve and the precision at 80% recall, the L2 distance to zero performs slightly better than the L2 distance.

Item: Curve fitting with confidence for preclinical dose-response data (2015-02-17)
Cardner, Mathias; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

In the preclinical stage of pharmaceutical drug development, when investigating the medicinal properties of a new compound, there are two important questions to address. The first question is simply whether the compound has a significant beneficial effect compared to vehicle (placebo) or reference treatments. The second question concerns the more nuanced dose-response relationship of the compound of interest. One of the aims of this thesis is to design an experiment appropriate for addressing both of these questions simultaneously.
Another goal is to make this design optimal, meaning that dose-levels and sample sizes are arranged in a manner which maximises the amount of information gained from the experiment. We implement a method for assessing efficacy (the first question) in a modelling environment by basing inference on the confidence band of a regression curve. The verdicts of this method are compared to those of one-way ANOVA coupled with the multiple comparison procedure known as Dunnett's test. When applied to our empirical data sets, the two methods are in perfect agreement regarding which dose-levels have an effect at the 5% significance level. Through simulation, we find that our modelling approach tends to be more conservative than Dunnett's test in trials with few dose-levels, and vice versa in trials with many dose-levels. Furthermore, we investigate the effect of optimally designing the simulated trials, and also the consequences of misspecifying the underlying dose-response model during regression, in order to assess the robustness of the implemented method.

Item: Data Augmentation for Point Process Learning (2024-07-03)
Rost, Mathis; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

This thesis introduces and evaluates ideas for the use of data augmentation in the area of point process learning. Motivated by the regularizing effect of training with augmented data sets, we build on the paper "A cross-validation-based statistical theory for point processes" by Cronie et al. [2023]. We develop methods for applying data augmentation to point process data. We discuss the possibilities of augmenting the existing process with additional data points generated by a noise process, or by moving the already existing points in space. The developed methods are applied to common point process models, such as the hard-core process and the area interaction process. The augmented data is then used for inference.
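The two augmentation ideas discussed above, superposing noise points and perturbing existing points, can be sketched as below; the unit-square window, noise scales, and function names are illustrative choices, not the thesis's implementation:

```python
import random

def jitter_pattern(points, sigma, window=(0.0, 1.0, 0.0, 1.0), rng=None):
    """Augment a planar point pattern by moving each point with Gaussian
    noise, clamped to the observation window."""
    rng = rng or random.Random(0)
    xmin, xmax, ymin, ymax = window
    out = []
    for x, y in points:
        nx = min(max(x + rng.gauss(0.0, sigma), xmin), xmax)
        ny = min(max(y + rng.gauss(0.0, sigma), ymin), ymax)
        out.append((nx, ny))
    return out

def add_noise_points(points, n_extra, window=(0.0, 1.0, 0.0, 1.0), rng=None):
    """Augment by superposing uniformly scattered noise points."""
    rng = rng or random.Random(0)
    xmin, xmax, ymin, ymax = window
    return points + [(rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
                     for _ in range(n_extra)]
```

Either transform yields a new pattern in the same window, so the augmented patterns can be fed to the same fitting procedure as the original data.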
The simulation study, in which the different options discussed are applied, shows promising results. The regularizing effect of data augmentation can be observed and thus motivates further investigation into this topic.

Item: Decision Making Under Uncertainty: A Robust Optimization (2014-10-02)
Androulakis, Emmanouil; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

Item: Decision Policies for Early Stage Clinical Trials with Multiple Endpoints (2022-11-11)
López Juan, Víctor; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

Before a drug can be prescribed to patients, it must be shown to be safe and effective for a certain indication in a controlled clinical trial (known as Phase III). Such studies are costly to run and expose patients to potential risks. Therefore, after initial studies in human subjects show the drug's safety (Phase I), studies with a small number of patients are run to assess the prospects of the drug (Phase II). If the number of patients in a Phase II study is not sufficient to detect differences in the variable of interest (e.g. the number of hospitalizations due to heart failure), a surrogate variable which is predictive of the variable of interest is used instead. A decision framework originally proposed by Lalonde (2007) is used in industry to determine, based on a single surrogate endpoint, whether to "Go" ahead with a Phase III study or to "Stop" development of the drug. In some therapeutic areas, a single endpoint is not sufficient to predict the Phase III variable of interest; several related endpoints are used instead. Endpoints which are considered clinically related may be grouped into domains. How best to combine several disease markers across different domains to achieve the desired probabilities of correct and incorrect decisions is an open question.
This report presents an extension of the Lalonde decision framework to multiple endpoints. In this extension, decision policies are formulated on two levels. First, a Go or Stop decision is made for each domain, for example by individually comparing each of the relevant endpoints to certain thresholds. Performing multiple comparisons heightens the risk of an incorrect Go decision. This risk can be controlled effectively by using the Simes procedure (1986), a special case of the Benjamini-Hochberg (1995) method. Domain-level decisions are then combined into policies fulfilling a monotonicity property. This property enables the calculation of upper bounds for the probability of an incorrect decision, and lower bounds for the probability of a correct decision. These calculations are performed both for purely synthetic endpoints and for a case study involving endpoints related to heart failure. The resulting bounds are analogous to the statistical notions of Type I error and power, respectively. Heuristics are derived to help practitioners decide which endpoints to include, depending on the statistical power of these endpoints and on which combinations of true effects are of clinical interest. Overall, the framework proposed in this report can represent many of the policies used by practitioners when designing Phase II studies with multiple endpoints.
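The Simes (1986) procedure used above to control the risk of an incorrect Go can be sketched as a global test over the per-endpoint p-values; this is the textbook version, not code from the report:

```python
# Simes global test: with the m p-values sorted in ascending order,
# reject the joint null if p_(i) <= i * alpha / m for at least one i.
# Less strict than Bonferroni, yet controls the error rate for
# independent or positively dependent p-values.

def simes_test(p_values, alpha=0.05):
    """Return True if the global null is rejected at level alpha."""
    m = len(p_values)
    return any(p <= (i + 1) * alpha / m
               for i, p in enumerate(sorted(p_values)))
```

In the two-level setting described above, such a test could serve as the domain-level Go/Stop rule, with the domain verdicts then combined by a monotone policy.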
The outcome of the simulations presented in this thesis can guide the selection of endpoints in order to achieve the desired bounds on the probabilities of correct and incorrect decisions.

Item: Delayed-acceptance approximate Bayesian computation Markov chain Monte Carlo: faster simulation using a surrogate model (2020-01-09)
Krogdal, Andrea; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

The thesis introduces an innovative way of decreasing the computational cost of approximate Bayesian computation (ABC) simulations when implemented via Markov chain Monte Carlo (MCMC). Bayesian inference has enjoyed incredible success since the beginning of the 1990s, thanks to the rediscovery of MCMC procedures and the availability of powerful personal computers. ABC is today the most popular strategy for performing Bayesian inference when the likelihood function is analytically unavailable. However, ABC procedures can be computationally challenging to run, as they require frequent simulations from the data-generating model. In this thesis we consider learning a so-called "surrogate model", one that is cheaper to simulate from than the assumed data-generating model, and in this manner save computational time. The strategy implemented is known in the MCMC literature as "delayed acceptance MCMC"; however, to the best of our knowledge it has not previously been adapted into an ABC framework. Simulation studies consider the approach on two different models, producing Gaussian data and g-and-k distributed data, respectively.
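The delayed-acceptance idea can be sketched as below, under the simplifying assumptions of a flat prior and a symmetric random-walk proposal (so the Metropolis-Hastings ratio reduces to the ABC distance checks); the toy simulators and all names are illustrative, not the thesis's models:

```python
import random

def da_abc_mcmc(obs_stat, cheap_sim, expensive_sim, n_iter, eps, prop_sd, rng):
    """Delayed-acceptance ABC-MCMC sketch: each proposal is first screened
    with the cheap surrogate simulator; the expensive simulator is run
    only for proposals that survive the screen."""
    theta, chain = 0.0, []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, prop_sd)
        # Stage 1: cheap surrogate screen (most rejections happen here)
        if abs(cheap_sim(prop, rng) - obs_stat) <= eps:
            # Stage 2: expensive simulator, run only if stage 1 passed
            if abs(expensive_sim(prop, rng) - obs_stat) <= eps:
                theta = prop
        chain.append(theta)
    return chain

# Toy Gaussian example: each "simulator" returns a noisy summary statistic
cheap = lambda th, r: th + r.gauss(0.0, 0.5)      # crude but fast surrogate
expensive = lambda th, r: th + r.gauss(0.0, 0.1)  # accurate, notionally slow
chain = da_abc_mcmc(obs_stat=2.0, cheap_sim=cheap, expensive_sim=expensive,
                    n_iter=3000, eps=0.5, prop_sd=1.0, rng=random.Random(1))
```

The saving comes from stage 1: since most proposals fail the cheap screen, the expensive simulator is called only a fraction of the time, which is the mechanism behind the acceleration reported below.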
For the most challenging example we observed that our approach, consisting of a delayed-acceptance ABC algorithm, led to a 20-fold acceleration in the MCMC sampling compared to a standard ABC-MCMC algorithm.

Item: Design and analysis of pre-clinical experiments using a method combining multiple comparisons and modeling techniques for dose-response studies (2021-04-20)
Singh, Avijit; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

Identifying and estimating the dose-response relationship between a compound and a pharmacological endpoint of interest is one of the most important and difficult goals in the preclinical stage of pharmaceutical drug development. We conduct pharmacodynamic studies to investigate the dose-response profile, and then different studies to find doses that lead to desired efficacy or acceptable safety in the endpoint(s). The aim of this thesis is to provide an overview of existing techniques and designs of experiments which are appropriate for addressing the goals of these studies simultaneously. We have used a method combining multiple comparisons and modeling techniques (MCPMod) in designing the experiments, and found that we can reduce the required total sample size by using an optimal design. We have analysed the simulated data using MCPMod and observed that this method can be used to identify the dose-response relationship and to estimate the dose at a required effect. We have compared two approaches to estimating the dose and discovered that using a weighted average of all fitted models gives a similar result to using the best fitted model. Finally, we have investigated the possibility of identifying the presence of toxicity in the response of a few or many samples at higher doses, and found that we can detect toxicity if there are many samples with a toxic response at a higher dose.
This combined strategy is both financially and ethically rewarding, as it reduces the time and cost of the study and also reduces the number of animals used in pre-clinical trials.

Item: Dilation theory and some of its applications (2025-05-19)
Mehrabi, Mohammadhossein; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

The purpose of this thesis is to explain some results in dilation theory. To this end, it presents the principal results and ideas on positive maps and dilation theory, such as Nagy's theorem, von Neumann's inequality and several of its variants, Stinespring's theorem, dilations for commuting and non-commuting sets of operators, and Shilov boundaries. Nevanlinna-Pick interpolation and an application in control theory are also considered.

Item: Dilation Theory and some of its Applications (2024-07-04)
Mehrabi, Mohammadhossein; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

The purpose of this thesis is to explain some results in dilation theory. To this end, it presents the principal results and ideas on positive maps and dilation theory, such as Nagy's theorem, von Neumann's inequality and several of its variants, Stinespring's theorem, dilations for commuting and non-commuting sets of operators, and Shilov boundaries. Nevanlinna-Pick interpolation and an application in control theory are also considered.

Item: Discretization of the Interior Neumann Problem using Lusin Wavelets (2025-05-19)
Jonsson, Jakob; Timlin, Emil; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

We prove the existence of Hilbert space frames for the complex Hardy subspaces of L²(T), consisting of simple rational functions whose poles are arranged according to a Whitney partition.
We also present parts of the classical existence theory for the Dirichlet and Neumann problems based on layer potentials. We use our frame to construct new methods, the Casazza-Christensen method (CC method) and the Whitney method of fundamental solutions (WMFS), for numerically solving Laplace's equation with Neumann boundary data on the unit disk. Our goal is to resolve problems in computing the solution to high accuracy near the boundary, a difficulty typical of the boundary integral equation (BIE) method. The methods are implemented in MATLAB and their performance is analyzed. Both methods converge exponentially when the exact solution is a polynomial, but when the exact solution is a rational function with a pole just outside the boundary, the convergence is considerably slower. For polynomial data, the accuracy of the new methods near the boundary is much better than for a simple implementation of the BIE method. Some partial theoretical results related to the convergence and conditioning of the method of fundamental solutions and the WMFS are proved.

Item: Do viscous flows slip? (2023-12-21)
Sjösvärd, Björn; University of Gothenburg/Department of Mathematical Science; Göteborgs universitet/Institutionen för matematiska vetenskaper

In this thesis, the Stokes equation is discussed and solved under different boundary conditions. The Stokes equation governs the flow of viscous liquids, for example honey or syrup. The first chapters of the thesis provide an introduction to multivector algebra and analysis, with the aim of presenting the concept of Hodge decompositions. With an application of this theory, the Stokes equation with the Hodge boundary conditions is solved using the finite element method. This is compared to the solution of the Stokes equation under the more standard no-slip condition. It is concluded that the Hodge boundary conditions are natural from a mathematical point of view, although they cannot be used to model physical flows.
In particular, they are contrary to the known physical fact that viscous flows tend to stick to the boundary. Moreover, it is shown that the Hodge boundary conditions can be interpreted in such a way that the friction at the boundary of the domain is solely determined by the curvature.
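For reference, the no-slip formulation against which the Hodge conditions are contrasted above is the stationary Stokes system; in standard notation (not quoted from the thesis) it reads:

```latex
-\mu \Delta u + \nabla p = f, \qquad \nabla \cdot u = 0 \quad \text{in } \Omega,
\qquad u = 0 \quad \text{on } \partial\Omega,
```

where $u$ is the velocity field, $p$ the pressure, $\mu$ the viscosity, and the no-slip condition $u = 0$ on $\partial\Omega$ expresses the physical tendency of viscous flows to stick to the boundary.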