Issue title: Meaning in Context: Ontologically and linguistically motivated representations of objects and events
Guest editors: Valerio Basile, Tommaso Caselli and Daniele P. Radicioni
Article type: Research Article
Authors: Hosseini, Hawre; * | Nguyen, Tam T. | Wu, Jimmy | Bagheri, Ebrahim
Affiliations: Laboratory for Systems, Software and Semantics (LS3), Ryerson University, Toronto, Canada. E-mails: hawre.hosseini@ryerson.ca, nthanhtam@gmail.com, jimmy1.wu@ryerson.ca, bagheri@ryerson.ca
Correspondence: [*] Corresponding author. E-mail: hawre.hosseini@ryerson.ca.
Note: Accepted by: Valerio Basile
Abstract: Within the context of Twitter analytics, the notion of implicit entity linking has recently been introduced to refer to the identification of a named entity that is central to the topic of a tweet but whose surface form is not present in the tweet itself. Compared to traditional forms of entity linking, where the linking process revolves around an identified surface form of a potential entity, implicit entity linking relies on contextual clues to determine whether an implicit entity is present within a given tweet and, if so, which entity is being referenced. The objective of this paper is to perform the task of implicit entity linking while also introducing and publicly sharing a comprehensive gold standard dataset for this task. The dataset consists of 7,870 tweets, each classified as containing implicit entities, explicit entities, both, or neither. The implicit entities are then linked to three levels of entities on Wikipedia: the coarse-grained level, e.g., Person; the fine-grained level, e.g., Comedian; and the actual entity, e.g., Seinfeld. The model proposed in this work formulates implicit entity linking as an ad-hoc document retrieval process in which the input query is the tweet to be implicitly linked and the document space is the set of textual descriptions of entities in the knowledge base. The novel contributions of our work include: 1) designing and collecting a gold standard dataset for the task of implicit entity linking; 2) defining the implicit entity linking process as an ad-hoc document retrieval task; and 3) proposing a neural embedding-based feature function that is interpolated with prior term dependency and entity-based feature functions to enhance implicit entity linking. We systematically compare our work with existing work in this area and show that our method provides improvements on a number of retrieval measures.
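The retrieval formulation described in the abstract can be illustrated with a minimal sketch: each candidate entity's textual description is scored against the tweet by linearly interpolating an embedding-based similarity with a term-based feature, and candidates are ranked by the combined score. The token lists, the fixed embedding similarities, the simple term-overlap feature, and the mixing weight `lam` below are all hypothetical stand-ins, not the paper's actual feature functions or parameters.

```python
def term_score(tweet_tokens, doc_tokens):
    # Hypothetical term-based feature: Jaccard overlap between the tweet
    # and the entity description (a stand-in for a term-dependency model).
    tweet_set, doc_set = set(tweet_tokens), set(doc_tokens)
    union = tweet_set | doc_set
    return len(tweet_set & doc_set) / len(union) if union else 0.0

def interpolated_score(emb_sim, term_sim, lam=0.6):
    # Linear interpolation of an embedding-based feature with a term-based
    # feature; lam is an assumed mixing weight, not a value from the paper.
    return lam * emb_sim + (1 - lam) * term_sim

# Toy example: rank two candidate entity descriptions for one tweet.
tweet = "what's the deal with airline food".split()
candidates = {
    "Seinfeld": "american sitcom comedian jerry seinfeld airline food jokes".split(),
    "Friends": "american sitcom new york friends coffee".split(),
}
# Stand-in embedding similarities (in practice these would come from
# trained neural embeddings of the tweet and entity descriptions).
emb = {"Seinfeld": 0.82, "Friends": 0.35}

ranked = sorted(
    candidates,
    key=lambda e: interpolated_score(emb[e], term_score(tweet, candidates[e])),
    reverse=True,
)
print(ranked[0])  # the top-ranked candidate entity
```

This mirrors the ad-hoc retrieval view only in shape: the tweet plays the role of the query, and entity descriptions play the role of documents; the actual feature functions in the paper are richer than this sketch.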
Keywords: Implicit entity linking, Semantic retrieval, DBpedia, Knowledge graph
DOI: 10.3233/AO-190215
Journal: Applied Ontology, vol. 14, no. 4, pp. 451-477, 2019
IOS Press, Inc.