CLASP Papers in Computational Linguistics

Permanent URI for this collection: https://gupea-staging.ub.gu.se/handle/2077/54899

Recent Submissions

  • Item
    Proceedings of the 2023 CLASP Conference on Learning with Small Data
    (The Association for Computational Linguistics, 2023-09-10) Breitholtz, Ellen; Lappin, Shalom; Loáiciga, Sharid; Ilinykh, Nikolai; Dobnik, Simon; Centre for Linguistic Theory and Studies in Probability (CLASP); Department of Philosophy, Linguistics and Theory of Science (FLoV); University of Gothenburg
    The purpose of our conference is to bring together researchers from several areas of NLP, addressing datasets, methods and limits of effective (machine) learning with small data containing natural language and associated multi-modal information. The conference covers areas such as machine learning, natural language processing, language technology, computational linguistics, theoretical linguistics, psycholinguistics, as well as artificial intelligence, cognitive science, ethics, and policy.
  • Item
    Proceedings of the 2022 CLASP Conference on (Dis)embodiment
    (The Association for Computational Linguistics, 2022-09-14) Dobnik, Simon; Grove, Julian; Sayeed, Asad; Department of Philosophy, Linguistics and Theory of Science (FLoV); Centre for Linguistic Theory and Studies in Probability (CLASP)
    (Dis)embodiment brings together researchers from several areas examining the role of grounding and embodiment – or the limits thereof – in modelling human language and behaviour. The conference covers areas such as machine learning, computational linguistics, theoretical linguistics and philosophy, cognitive science and psycholinguistics, as well as artificial intelligence, ethics, and policy.
  • Item
    CLASP Papers in Computational Linguistics
    (Centre for Linguistic Theory and Studies in Probability (CLASP), 2020-02-25) Howes, Christine; Dobnik, Simon; Breitholtz, Ellen; Department of Philosophy, Linguistics and Theory of Science (FLoV); University of Gothenburg
    This volume showcases research which aims to bridge the gaps between research on dialogue and research on perception. Dialogue research investigates how natural language is used in interaction between interlocutors and how coordination and successful communication are achieved. However, this research often takes for granted that we align our perceptual representations, taken to be part of common ground (grounding in dialogue). It has also typically remained silent about how we integrate information from different sources and modalities. This is unsustainable when we consider interactions between agents with obviously different perceptual capabilities, such as dialogues between humans and avatars or robots. In contrast, studies of perception have focussed on how an agent interacts with and interprets the information from their perceptual environment. There is significant research on how language is grounded in perception, and connected to perceptual representations and actions and thereby assigned meaning (grounding in action and perception). Recently, there has been progress on integrated computational approaches to language, action, and perception, especially with the introduction of deep learning methods in the field of image description that use end-to-end training from data. However, these remain only loosely integrated with the dynamics of dialogue and often fail to take into account the incremental and context-sensitive nature of language and the environment.
  • Item
    CLASP Papers in Computational Linguistics
    (Centre for Linguistic Theory and Studies in Probability (CLASP), 2017-11) Dobnik, Simon; Lappin, Shalom; Department of Philosophy, Linguistics and Theory of Science (FLoV); University of Gothenburg
    The past two decades have seen impressive progress in a variety of areas of AI, particularly NLP, through the application of machine learning methods to a wide range of tasks. With the intensive use of deep learning methods in recent years, this work has produced significant improvements in the coverage and accuracy of NLP systems in such domains as speech recognition, topic identification, semantic interpretation, and image description generation. While deep learning is opening up exciting new approaches to long-standing, difficult problems in computational linguistics, it also raises important foundational questions. Specifically, we do not have a clear formal understanding of why multi-level recursive deep neural networks achieve the success in learning and classification that they deliver. It is also not obvious whether they should displace more traditional, logically driven methods, or be combined with them. Finally, we need to explore the extent, if any, to which both logical models and machine learning methods offer insights into the cognitive foundations of natural language. The aim of the Conference on Logic and Machine Learning in Natural Language (LAML) was to initiate a dialogue between these two approaches, which have traditionally remained separate and in competition.