Show simple item record

dc.contributor.advisor: Freitas, Andre
dc.contributor.author: Barzegar, Siamak
dc.date.accessioned: 2019-02-13T14:23:54Z
dc.date.issued: 2019-02-13
dc.identifier.uri: http://hdl.handle.net/10379/14949
dc.description.abstract: Distributional semantics is built on the assumption that the context surrounding a given word in text provides important information about its meaning (the distributional hypothesis). A rephrasing of the distributional hypothesis states that words that occur in similar contexts tend to have similar meanings. Distributional semantics focuses on constructing a semantic representation of a word from the statistical distribution of word co-occurrence in texts. Distributional Semantic Models (DSMs) represent these co-occurrence patterns in a vector space. In recent years, word embedding/distributional semantic models have become a fundamental component in many natural language processing (NLP) architectures due to their ability to capture and quantify semantic associations at scale. Distributional semantics has been applied to different NLP tasks, such as finding similar or related words and phrases, computing semantic relatedness measures, and classifying semantic relations. Distributional semantic models depend strongly on the size and quality of the reference corpora, which embed the common-sense knowledge necessary to build comprehensive models. While high-quality texts containing large-scale common-sense and domain-specific information exist for English, such as Wikipedia, other languages may lack sufficient textual support to build comprehensive distributional models. Distributional semantic models are also often limited to semantic similarity/relatedness between two entities/terms with no explicit relation type; often, it is not possible to assign a direct semantic relation between entities.
This thesis analyses transportability aspects (language and domain) and explores both coarse- and fine-grained semantics for direct and indirect relation classification, using a unified architecture (Indra) to develop language- and domain-independent DSMs with advanced (compositional) relation classification capabilities. (en_IE)
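The distributional hypothesis summarised in the abstract can be sketched with a minimal count-based model. This is a toy illustration only, not the Indra architecture described in the thesis; the corpus and window size are invented for the example:

```python
from collections import Counter, defaultdict
import math

# Toy corpus; a real DSM would use a large reference corpus such as Wikipedia.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the cat chased the dog".split(),
]

# Count co-occurrences within a symmetric context window of +/- 2 words.
window = 2
cooc = defaultdict(Counter)
for sentence in corpus:
    for i, word in enumerate(sentence):
        for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
            if j != i:
                cooc[word][sentence[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(count * v[w] for w, count in u.items())
    norm = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

# "cat" and "dog" occur in similar contexts (both follow "the" and precede
# "sat"/"chased"), so their context vectors end up close in the vector space.
print(cosine(cooc["cat"], cooc["dog"]))
```

Each word's row of co-occurrence counts is its vector, and similarity of contexts translates directly into vector similarity, which is the property DSMs exploit at scale.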
dc.publisher: NUI Galway
dc.subject: Distributional Semantic Model (en_IE)
dc.subject: Word Embedding (en_IE)
dc.subject: Natural Language Processing (en_IE)
dc.subject: Deep Learning (en_IE)
dc.subject: Lightweight Machine Translation (en_IE)
dc.subject: Transportability (en_IE)
dc.subject: Composite Semantic Relation Classification (en_IE)
dc.subject: Engineering and Informatics (en_IE)
dc.title: A transportable distributional semantics architecture (en_IE)
dc.type: Thesis (en)
dc.contributor.funder: Science Foundation Ireland (en_IE)
dc.contributor.funder: Horizon 2020 (en_IE)
dc.description.embargo: 2019-12-12
dc.local.final: Yes (en_IE)
dcterms.project: info:eu-repo/grantAgreement/EC/H2020::IA/645425/EU/Social Sentiment analysis financial IndeXes/SSIX (en_IE)
dcterms.project: info:eu-repo/grantAgreement/SFI/SFI Research Centres/12/RC/2289/IE/INSIGHT - Irelands Big Data and Analytics Research Centre/ (en_IE)
nui.item.downloads: 0


Files in this item

Attribution-NonCommercial-NoDerivs 3.0 Ireland
This item is available under the Attribution-NonCommercial-NoDerivs 3.0 Ireland licence. No item may be reproduced for commercial purposes. Please refer to the publisher's URL where this is made available, or to notes contained in the item itself. Other terms may apply.


