dc.contributor.author | Nieto Piña, Luis | |
dc.date.accessioned | 2019-08-23T08:03:45Z | |
dc.date.available | 2019-08-23T08:03:45Z | |
dc.date.issued | 2019-08-23 | |
dc.identifier.isbn | 978-91-87850-75-2 | |
dc.identifier.issn | 0347-948X | |
dc.identifier.uri | http://hdl.handle.net/2077/60509 | |
dc.description.abstract | The representation of written language semantics is a central problem of language technology and a crucial component of many natural language processing applications, from part-of-speech tagging to text summarization. Representations of linguistic units, such as words or sentences, allow computer applications that work with language to process and manipulate the meaning of text. In particular, a successful family of models automatically learns semantics from large collections of text and embeds linguistic units into a vector space, where semantic or lexical similarity is a function of geometric distance. Co-occurrence information about words in context is the main source of data used to learn these representations.
Such models have typically been used to learn representations of word forms, and these have proven highly successful as characterizations of semantics at the word level. However, a word-level approach to meaning representation implies that the different meanings, or senses, of a polysemous word share one single representation. This might be problematic when individual word senses are of interest and explicit access to their specific representations is required: for instance, when an application needs to deal with word senses rather than word forms, or when a digital lexicon's sense inventory has to be mapped to a set of learned semantic representations.
In this thesis, we present a number of models that tackle this problem by automatically learning representations for word senses instead of words. We do so by drawing on two separate sources of information: corpora and lexica for the Swedish language. Throughout the five publications compiled in this thesis, we demonstrate that word sense representations can be generated from these sources of data both individually and in conjunction, and we observe that combining them yields superior accuracy and sense inventory coverage. Furthermore, in our evaluation of the different representational models proposed here, we showcase the applicability of word sense representations both to downstream natural language processing applications and to the development of existing linguistic resources. | sv
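The abstract above describes models in which lexical similarity corresponds to geometric proximity between learned vectors, and motivates keeping separate vectors per word sense. As a minimal illustrative sketch of that idea (not code from the thesis; the toy vectors and sense labels below are invented for this example), cosine similarity over embeddings can be computed as follows:

import numpy as np

# Toy vectors standing in for learned word (or sense) embeddings.
# In practice these would be trained from co-occurrence statistics in a
# large corpus; the numbers here are invented purely for illustration.
vectors = {
    "bank_finance": np.array([0.9, 0.1, 0.2]),
    "bank_river":   np.array([0.1, 0.8, 0.3]),
    "money":        np.array([0.8, 0.2, 0.1]),
}

def cosine_similarity(u, v):
    # Similarity as a function of geometry: the cosine of the
    # angle between the two vectors (1.0 = identical direction).
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# A sense-level model keeps distinct vectors for each sense of "bank",
# so "money" can be close to the financial sense without dragging the
# river sense along with it, as a single word-level vector would.
print(cosine_similarity(vectors["bank_finance"], vectors["money"]))
print(cosine_similarity(vectors["bank_river"], vectors["money"]))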
dc.language.iso | eng | sv |
dc.relation.ispartofseries | Data Linguistica | sv |
dc.relation.ispartofseries | 30 | sv |
dc.relation.haspart | Luis Nieto Piña and Richard Johansson, 2015. A simple and efficient method to generate word sense representations. Proceedings of the International Conference Recent Advances in Natural Language Processing, 465–472. Hissar, Bulgaria. | sv
dc.relation.haspart | Luis Nieto Piña and Richard Johansson, 2016. Embedding senses for efficient graph-based word sense disambiguation. Proceedings of TextGraphs-10: the Workshop on Graph-based Methods for Natural Language Processing, NAACL-HLT 2016, 1–5. San Diego, USA. | sv
dc.relation.haspart | Luis Nieto Piña and Richard Johansson, 2017. Training word sense embeddings with lexicon-based regularization. Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Asian Federation of Natural Language Processing. Taipei, Taiwan. | sv
dc.relation.haspart | Luis Nieto Piña and Richard Johansson, 2018. Automatically linking lexical resources with word sense embedding models. Proceedings of the Third Workshop on Semantic Deep Learning (SemDeep-3), COLING 2018, 23–29. Association for Computational Linguistics. Santa Fe, USA. | sv
dc.relation.haspart | Lars Borin, Luis Nieto Piña and Richard Johansson, 2015. Here be dragons? The perils and promises of inter-resource lexical-semantic mapping. Proceedings of the Workshop on Semantic Resources and Semantic Annotation for Natural Language Processing and the Digital Humanities at NODALIDA 2015, 1–11. Vilnius, Lithuania. | sv
dc.subject | language technology | sv |
dc.subject | natural language processing | sv |
dc.subject | distributional models | sv |
dc.subject | semantic representations | sv |
dc.subject | distributed representations | sv |
dc.subject | word senses | sv |
dc.subject | embeddings | sv |
dc.subject | word sense disambiguation | sv |
dc.subject | linguistic resources | sv |
dc.subject | corpus | sv |
dc.subject | lexicon | sv |
dc.subject | machine learning | sv |
dc.subject | neural networks | sv |
dc.title | Splitting rocks: Learning word sense representations from corpora and lexica | sv |
dc.type | Text | |
dc.type.svep | Doctoral thesis | eng |
dc.type.degree | Doctor of Philosophy | sv |
dc.gup.origin | Göteborgs universitet. Humanistiska fakulteten | swe |
dc.gup.origin | University of Gothenburg. Faculty of Arts | eng |
dc.gup.department | Department of Swedish ; Institutionen för svenska språket | sv |
dc.gup.defenceplace | Friday, 13 September 2019, at 13:15, Lilla hörsalen, Humanisten, Lundgrensgatan 1B | sv
dc.gup.defencedate | 2019-09-13 | |
dc.gup.dissdb-fakultet | HF | |