Show simple item record

dc.contributor.author: Bizzoni, Yuri
dc.date.accessioned: 2019-01-31T09:26:03Z
dc.date.available: 2019-01-31T09:26:03Z
dc.date.issued: 2019-01-31
dc.identifier.isbn: 978-91-7833-311-0
dc.identifier.uri: http://hdl.handle.net/2077/58277
dc.description.abstract: Metaphor is one of the most prominent, and most studied, figures of speech. While it is considered an element of great interest in several branches of linguistics, such as semantics, pragmatics and stylistics, its automatic processing remains an open challenge. First, the semantic complexity of the concept of metaphor itself creates a range of theoretical complications. Second, the practical lack of large-scale resources for machine learning approaches forces researchers to work under conditions of data scarcity. This compilation thesis provides a set of experiments to (i) automatically detect metaphors and (ii) assess a metaphor's aptness with respect to a given literal equivalent. The first task has already been tackled by a number of studies. We approach it as a way to assess the potential and limitations of our approach before dealing with the second task. For metaphor detection we were able to use existing resources, while we created our own dataset to explore metaphor aptness assessment, which constitutes the most innovative part of this research. In all of the studies presented here, I have used a combination of word embeddings and neural networks. This combination appears particularly effective, since pre-trained word embeddings can provide the networks with the information necessary to deal with metaphors under conditions of data scarcity. To deal with metaphor aptness assessment, we frame the problem as a case of paraphrase identification. Given a sentence containing a metaphor, the task is to find the best literal paraphrase from a set of candidates. We build a dataset designed for this task, which allows gradient scoring of candidate paraphrases with respect to a reference sentence, so that paraphrases are ordered according to their degree of aptness. Therefore, we can use it for both binary classification and ordering tasks. This dataset is annotated through crowdsourcing, with an average of 20 annotators for each pair. We then design a deep neural network to be trained on this dataset. We show that its architecture is able to achieve encouraging levels of performance, despite the serious data scarcity under which it is applied. In the final experiment of this compilation, more context is added to a subsection of the dataset in order to study the effect of extended context on metaphor aptness rating. We show that extended context changes human perception of metaphor aptness and that this effect is reproduced by our neural classifier. The conclusion of the last study is that extended context compresses aptness scores towards the center of the scale, raising low ratings and lowering high ratings that were given to paraphrase candidates outside of any context. [sv]
dc.language.iso: eng [sv]
dc.relation.haspart: Bizzoni, Yuri, Stergios Chatzikyriakidis, and Mehdi Ghanimifard. "'Deep' Learning: Detecting Metaphoricity in Adjective-Noun Pairs." Proceedings of the Workshop on Stylistic Variation. 2017. http://www.aclweb.org/anthology/W17-4906 [sv]
dc.relation.haspart: Bizzoni, Yuri, Marco Silvio Giuseppe Senaldi, and Alessandro Lenci. "Finding the Neural Net: Deep-learning Idiom Type Identification from Distributional Vectors." (2018): 27-41. Submitted. [sv]
dc.relation.haspart: Bizzoni, Yuri, and Mehdi Ghanimifard. "Bigrams and BiLSTMs: Two Neural Networks for Sequential Metaphor Detection." Proceedings of the Workshop on Figurative Language Processing. 2018. http://www.aclweb.org/anthology/W18-0911 [sv]
dc.relation.haspart: Bizzoni, Yuri, and Shalom Lappin. "Deep Learning of Binary and Gradient Judgements for Semantic Paraphrase." IWCS 2017 – 12th International Conference on Computational Semantics – Short Papers. 2017. http://www.aclweb.org/anthology/W17-6903 [sv]
dc.relation.haspart: Bizzoni, Yuri, and Shalom Lappin. "Predicting Human Metaphor Paraphrase Judgments with Deep Neural Networks." Proceedings of the Workshop on Figurative Language Processing. 2018. http://www.aclweb.org/anthology/W18-0906 [sv]
dc.relation.haspart: Bizzoni, Yuri, and Shalom Lappin. "The Effect of Context on Metaphor Paraphrase Aptness Judgments." arXiv preprint arXiv:1809.01060 (2018). https://arxiv.org/abs/1809.01060 [sv]
dc.subject: metaphor detection [sv]
dc.subject: deep neural network [sv]
dc.subject: distributional semantic spaces [sv]
dc.subject: metaphor aptness [sv]
dc.title: Detection and Aptness: A study in metaphor detection and aptness assessment through neural networks and distributional semantic spaces [sv]
dc.type: Text
dc.type.svep: Doctoral thesis [eng]
dc.gup.mail: yuri.bizzoni@gu.se [sv]
dc.type.degree: Doctor of Philosophy [sv]
dc.gup.origin: Göteborgs universitet. Humanistiska fakulteten [swe]
dc.gup.origin: University of Gothenburg. Faculty of Arts [eng]
dc.gup.department: Department of Philosophy, Linguistics and Theory of Science; Institutionen för filosofi, lingvistik och vetenskapsteori [sv]
dc.gup.defenceplace: Thursday, 21 February 2019, at 14:00, room T302, Olof Wijksgatan 6 [sv]
dc.gup.defencedate: 2019-02-21
dc.gup.dissdb-fakultet: HF
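
The abstract above frames metaphor aptness assessment as paraphrase scoring over pre-trained word embeddings. As a purely illustrative, hypothetical sketch (not the architecture developed in the thesis), the following PyTorch snippet shows one minimal way such a scorer could be set up: word embeddings for a metaphorical sentence and a candidate literal paraphrase are mean-pooled, combined, and passed through a small feed-forward network that outputs a graded aptness score. The class name ParaphraseAptnessScorer, the pooling and pairing scheme, the 300-dimensional embeddings, and the layer sizes are all assumptions made for this sketch.

# A minimal, hypothetical sketch (not the thesis architecture) of framing
# metaphor aptness assessment as paraphrase scoring: two sentences are
# mean-pooled into fixed-size embedding vectors and a small feed-forward
# network outputs an aptness score in [0, 1].
import torch
import torch.nn as nn

EMBED_DIM = 300  # assumed dimensionality of the pre-trained word embeddings


class ParaphraseAptnessScorer(nn.Module):
    def __init__(self, embed_dim: int = EMBED_DIM, hidden: int = 128):
        super().__init__()
        # The pair is represented by the two pooled sentence vectors plus
        # their element-wise difference and product, a common pairing scheme.
        self.net = nn.Sequential(
            nn.Linear(4 * embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # score in [0, 1], interpretable as graded aptness
        )

    def forward(self, metaphor_vecs: torch.Tensor, paraphrase_vecs: torch.Tensor) -> torch.Tensor:
        # metaphor_vecs, paraphrase_vecs: (batch, seq_len, embed_dim) word embeddings
        m = metaphor_vecs.mean(dim=1)      # mean-pool words into a sentence vector
        p = paraphrase_vecs.mean(dim=1)
        features = torch.cat([m, p, torch.abs(m - p), m * p], dim=-1)
        return self.net(features).squeeze(-1)


if __name__ == "__main__":
    # Toy usage with random "embeddings"; in practice these would come from
    # pre-trained vectors, as the abstract suggests.
    scorer = ParaphraseAptnessScorer()
    metaphor = torch.randn(2, 7, EMBED_DIM)     # batch of 2 sentences, 7 tokens each
    candidates = torch.randn(2, 9, EMBED_DIM)   # candidate literal paraphrases
    scores = scorer(metaphor, candidates)
    print(scores)  # rank candidate paraphrases by this score

Because the output is a single graded score, the same sketch supports both uses mentioned in the abstract: thresholding it yields a binary paraphrase decision, and sorting candidates by it yields an aptness ordering.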


Files in this item


There are no files associated with this item.
