
Towards semantically enriched embeddings for knowledge graph completion

Abstract

Embedding-based Knowledge Graph (KG) completion has gained much attention over the past few years. Most of the current algorithms consider a KG as a multidirectional labeled graph and lack the ability to capture the semantics underlying the schematic information. This position paper reviews the state of the art and discusses several variations of the existing algorithms for KG completion, organized progressively by the level of expressivity of the semantics they utilize. The paper begins by analysing various KG completion algorithms that consider only factual information, such as transductive and inductive link prediction and entity type prediction algorithms. It then reviews the algorithms that utilize Large Language Models as background knowledge. Afterwards, it discusses the algorithms that progressively utilize semantic information, such as class hierarchy information within the KGs and the semantics represented in different description logic axioms. The paper concludes with a critical reflection on the current state of work in the community, where we argue that the aspects of semantics, rigorous evaluation protocols, and bias against external sources have not been sufficiently addressed in the literature, which hampers a more thorough understanding of the advantages and limitations of existing approaches. Lastly, we provide recommendations for future directions.

1.Introduction

Knowledge Graphs (KGs) have recently gained attention due to their applicability to diverse fields of research, ranging from knowledge management, representation, and reasoning to learning representations over KGs. KGs represent knowledge in the form of relations between entities structured as (head, relation, tail) triples, referred to as facts, along with schematic information in the form of ontologies. KGs have been used for various downstream tasks such as web search, recommender systems, and question answering.

KGs, however, suffer from incompleteness due to the way they are generated. Manual generation is limited to the knowledge represented by the curator and introduces curator bias [56], while automated generation may lead to erroneous or missing information. KG completion in particular includes the tasks of (i) triple classification, i.e., deciding whether a triple is true or not, (ii) Link Prediction (LP), i.e., completing the head, tail, or relation in a triple, and (iii) entity classification, also known as entity type prediction. To perform KG completion, various rule-based as well as embedding-based models have been proposed. These algorithms are computationally expensive and transductive: they only perform predictions based on triple information involving known entities and relations. They are therefore not readily usable when inference has to be performed on unseen entities and relations. To this end, inductive KG completion allows for predicting triples that involve unseen entities and relations. These transductive and inductive LP algorithms are mostly based on the factual information contained in KGs. Various studies leveraging language models as an external source of knowledge have also been proposed for KG completion. These algorithms lag behind KG embedding-based methods on ranking-based metrics such as Hits@k, since such metrics operate under the Closed World Assumption. This led to the need for human evaluation, since Large Language Models (LLMs) contain more general knowledge and may generate correct answers that differ from the expected ground truth. Apart from the factual information (i.e., the Assertion Box, ABox), another source of information is the ontological information (i.e., the Terminology Box, TBox) contained in the KG. Current methods almost completely ignore this TBox information. To rectify this situation, various attempts have been made to include type hierarchies and ontological information of different expressivity levels, such as EL++, ALC, etc. In some cases, additional representational capabilities, such as box embeddings, are utilized to capture this information.

Related work Several studies have surveyed the state of the art (SoTA) in KG completion. The work by Paulheim [59] provides a survey of articles related to KG refinement, including various classical and rule-based approaches for KG completion, yet KG embeddings are not discussed. Other surveys specifically focus on KG embedding-based methods for KG completion. Wang et al. [79] organise the algorithms for embedding-based KG completion according to their scoring functions, such as translational models, semantic matching models, etc. However, this survey does not discuss the methods proposed for KG completion using multimodal information related to an entity or relation, such as images, text, and numeric literals. These aspects are targeted in the survey by Gesese et al. [29], which categorizes these methods based on their scoring function (inspired by the work by Wang et al. [79]) as well as on the modalities they use. The survey provides theoretical and experimental comparisons of the existing approaches. Still, none of the aforementioned works cover the aspects of ontological statements and semantics in KG embeddings. Recent works [57,58] discuss the interplay of KGs and LLMs. The study by S. Pan et al. [58] describes methods that integrate LLMs for KG embeddings and KG completion, yet the role of schematic information is not considered in the analysis. Complementarily, J. Z. Pan et al. [57] describe how LLMs can be used to complete triples and perform different ontology engineering tasks (e.g., completion or refinement) in KGs. In contrast to these works, our work focuses on KG embeddings and how LLMs can be used as external sources to improve the performance of KG completion tasks. In summary, compared to these existing studies, the current article targets the semantic aspects of knowledge graph embeddings by summarizing and discussing the approaches that have been proposed so far for leveraging the semantics provided in the KG.

Contributions This position paper provides an overview of the evolution of methodologies proposed for KG completion, starting from the embedding-based algorithms and LLM-based approaches to the various categories of algorithms proposed for incorporating schematic information within KG embeddings for performing different kinds of completion tasks. It further discusses the limitations of the approaches in each of these categories and concludes with critical reflections and future research directions.

2.Preliminaries: Semantics in knowledge graphs

Commonly, KGs are defined as a set of head-relation-tail triples (h,r,t), where h and t are nodes in a graph, and r is a directed edge from h to t (e.g., [8,17,37,54]). In this way, a KG corresponds to a directed, labeled graph, where triples in the KG are labeled edges between labeled nodes, as captured in the following definition.

Definition 1 (Knowledge Graph: Triple Set Definition).

A knowledge graph is a directed, labeled graph G = (V, E, l, L_G), with V the set of nodes, E the set of edges, L_G the set of labels, and a labeling function l : V ∪ E → L_G. A triple (h, r, t) is a labeled edge, i.e., (h, r, t) = (l(h), l(r), l(t)) where r ∈ E and h, t ∈ V.
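To make Definition 1 concrete, the following minimal sketch (our own illustration; the entity and relation names are made up) represents a KG as a set of labeled triples and recovers the node and edge labels from it:

```python
# A KG as a set of labeled triples, following Definition 1 (illustrative sketch).
from typing import NamedTuple, Set

class Triple(NamedTuple):
    head: str      # label of a node in V
    relation: str  # label of an edge in E
    tail: str      # label of a node in V

kg: Set[Triple] = {
    Triple("Berlin", "capitalOf", "Germany"),
    Triple("Germany", "memberOf", "EU"),
}

# The label sets can be recovered from the triple set
node_labels = {t.head for t in kg} | {t.tail for t in kg}
edge_labels = {t.relation for t in kg}
```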

Table 1

Axioms in EL++, ALC, and SH. Notation: ⊤ top, ⊥ bottom, a, b are instances, C, D are concepts, r, s are relations, trans denotes transitivity.

        {a}   r(a,b)   C(a)   C ⊑ D   C ⊓ D   C ⊔ D   ¬C   r ⊑ s   trans(r)   r1 ∘ … ∘ rn ⊑ s
EL++     ✓      ✓       ✓       ✓       ✓       ✗      ✗     ✓        ✓              ✓
ALC      ✗      ✓       ✓       ✓       ✓       ✓      ✓     ✗        ✗              ✗
SH       ✗      ✓       ✓       ✓       ✓       ✓      ✓     ✓        ✓              ✗

Definition 1 captures the graph structure of KGs, yet aspects concerning the meaning of nodes and edges in a KG are not explicitly defined. First, there is no distinction regarding the kinds of nodes that can exist in a KG, i.e., whether nodes correspond to entities or text descriptions. This distinction exists in data models like the Resource Description Framework (RDF), where nodes can represent entities, labeled with IRIs or blank nodes, or literals that are used for values such as strings or different types of numbers like integers, floats, etc. Distinguishing between entities and literals impacts both the structure and connectivity of nodes in the graph as well as the meaning of things in the graph. As we will see in Section 3.1, specialised KG embedding models are required to handle both kinds of nodes and different types of literals in a KG. Second, the semantics (formal meaning) of classes or relations in the KG is not provided in Definition 1. This is typically the role of ontologies in KGs, which specify the definitions of classes (or concepts) and relations (or roles) using labels or symbols coming from logic. Different logical languages introduce different levels of expressivity (cf. Table 1). Expressivity here refers to the allowed complexity of statements that go beyond the simple assertions about individuals (e.g., class assertions) and their relations to other individuals or literals that comprise the ABox. More expressive statements included in the TBox comprise class and role definitions. The constructs and axioms defined in a logic language L can be transformed into labels and triples to be encoded in a KG. Based on this, we provide a definition for KGs that takes into account the semantics of statements.

Definition 2 (Knowledge Graph: Semantic Definition).

Let L be a logic language that defines the semantics of concepts and roles, and G = (V, E, l, L_G) a knowledge graph (KG) as in Definition 1. G is an L-KG if the labels L_G contain all the symbols defined in L, and the triples in G correspond to statements that can be expressed in L.

Typical logic languages used in KGs are in the family of Description Logics, due to their convenient trade-off between expressivity and scalability [65]. For example, the existential language EL defines the notions of concept intersection and full existential quantification. The extension EL++ introduces additional notions such as the bottom concept and concept subsumption; the latter is necessary to model class hierarchies in KGs. ALC provides additional expressivity w.r.t. EL++ as it includes concept union, negation, and universal quantification. Higher levels of expressivity are captured in KGs modeled with formalisms defined for the semantic web like the Web Ontology Language (OWL). OWL is based on the SH language, which includes more complex definitions for roles, including role subsumption and transitivity. Another important aspect of KG semantics is the concept of data types, captured in the (D) extension. (D) allows for modeling the meaning of literals in KGs, which can be part of statements in the ABox. It is important to note that KGs based on OWL can achieve different levels of expressivity beyond the ones described in this section: e.g., OWL-Lite is based on SHIF(D), OWL-DL on SHOIN(D), and OWL2 on SROIQ(D), the latter providing definitions for role reflexivity, irreflexivity, and disjointness, inverse properties, enumerated classes, and qualified cardinality restrictions.
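To make these expressivity levels concrete, the following example axioms (our own illustrations, not taken from a specific ontology) can be expressed at each level:

```latex
% EL++: conjunction, existential quantification, subsumption
\mathit{Male} \sqcap \mathit{Parent} \sqsubseteq \mathit{Father}
\qquad
\mathit{Parent} \sqsubseteq \exists \mathit{hasChild}.\mathit{Person}

% ALC additionally allows union, negation, and universal quantification
\mathit{Person} \sqsubseteq \mathit{Male} \sqcup \neg \mathit{Male}
\qquad
\mathit{Vegetarian} \sqsubseteq \forall \mathit{eats}.\neg \mathit{Meat}

% SH additionally allows role subsumption and role transitivity
\mathit{hasMother} \sqsubseteq \mathit{hasAncestor}
\qquad
\mathsf{trans}(\mathit{hasAncestor})
```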

3.Knowledge graph completion using embeddings

The vast majority of embedding-based algorithms for KG completion treat KGs as graph structures as presented in Definition 1. Most of these algorithms perform transductive link prediction (LP) [13], where inference is performed on the same graph on which the model is trained. In contrast, inductive LP is performed over an unseen graph, i.e., (parts of) the graph are not seen during training. This section gives an overview of KG embedding algorithms for transductive LP (Section 3.1), inductive LP algorithms (Section 3.2), and entity type prediction (Section 3.3), since the first two categories mostly focus on head, tail, relation, and triple prediction.

3.1.Transductive link prediction

A large variety of KG embeddings have been proposed for the LP task. They differ in their scoring function (which measures the plausibility of a triple in a KG) and their underlying learning methodology, such as translational models, semantic matching models, neural network models, path-based methods, and models based on multimodal KGs, e.g., making use of textual, numeric, and image literals. In the following, a brief overview of the models performing transductive LP is given. A detailed description of these models is provided by Dai et al. [13].

Translation-based models include TransE, TransH, etc. In TransE [8], the relation in a triple is considered a translation operation between the head and tail entities in a low-dimensional space. TransH [82] extends TransE by projecting the entity vectors to relation-specific hyperplanes, which helps in capturing the different roles of an entity w.r.t. different relations. Both models have obvious limitations, such as the inability to represent symmetric or transitive relations. The scoring function of RotatE [71] models the relation as a rotation in a complex plane to preserve the symmetric/anti-symmetric, inverse, and composition relations in a KG.
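As a concrete illustration, the following sketch computes TransE- and RotatE-style plausibility scores on randomly initialized toy embeddings (dimensions and values are illustrative; a trained model would learn these vectors):

```python
# Minimal sketch of translational scoring functions on toy embeddings.
import numpy as np

rng = np.random.default_rng(0)
d = 50
h, r, t = rng.normal(size=(3, d))

def transe_score(h, r, t, norm=1):
    # TransE: a triple is plausible if h + r lands close to t
    return -np.linalg.norm(h + r - t, ord=norm)

# RotatE: the relation is a rotation in the complex plane, |r_i| = 1
hc, tc = rng.normal(size=(2, d)) + 1j * rng.normal(size=(2, d))
rc = np.exp(1j * rng.uniform(0, 2 * np.pi, size=d))  # unit-modulus rotation

def rotate_score(h, r, t):
    # Element-wise rotation of the head should land on the tail
    return -np.linalg.norm(h * r - t, ord=1)

print(transe_score(h, r, t), rotate_score(hc, rc, tc))
```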

Semantic matching models are based on a similarity-based scoring function that measures the plausibility of a triple by matching the semantics of the latent representations of entities and relations. In DistMult [96], each entity is mapped to a d-dimensional dense vector, and each relation is mapped to a diagonal matrix. The score of a triple is computed as a bilinear product between the entity vectors and the relation matrix. RESCAL [54] models the KG as a three-way tensor and explains triples via pairwise interactions of latent features. The score of a triple is calculated as the weighted sum of all the pairwise interactions between the latent features of the head and the tail entities. ComplEx [76] extends DistMult by introducing a Hermitian dot product to better handle asymmetric relations.
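A corresponding sketch for the DistMult and ComplEx scores (again on toy embeddings; not taken from the cited implementations):

```python
# Minimal sketch of semantic matching scores on toy embeddings.
import numpy as np

rng = np.random.default_rng(1)
d = 50
h, r, t = rng.normal(size=(3, d))

def distmult_score(h, r, t):
    # Bilinear product h^T diag(r) t; note it is symmetric in h and t
    return float(np.sum(h * r * t))

hc, rc, tc = rng.normal(size=(3, d)) + 1j * rng.normal(size=(3, d))

def complex_score(h, r, t):
    # Hermitian product Re(<h, r, conj(t)>) breaks DistMult's symmetry,
    # allowing asymmetric relations to be modeled
    return float(np.real(np.sum(h * r * np.conj(t))))

print(distmult_score(h, r, t), complex_score(hc, rc, tc))
```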

Neural network models learn entity and relation representations with neural architectures; early models represented an entity simply as the average of the word embeddings of the entity name, while more recent models rely on convolutional or graph neural networks. ConvE [17] uses 2D convolutional layers to learn the embeddings of the entities and relations: the head entity and relation embeddings are reshaped and concatenated, which serves as input to the convolutional layer. The resulting feature map tensor is then vectorized, projected into a k-dimensional space, and matched with the tail embeddings using the logistic sigmoid function, minimizing the cross-entropy loss. In ConvKB [53], each triple is represented as a 3-column matrix which is then fed to a convolution layer. Multiple filters are applied to the matrix in the convolutional layer to generate different feature maps. Next, these feature maps are concatenated into a single feature vector representing the input triple. The feature vector is multiplied with a weight vector via a dot product to return a score, which is used to predict whether the triple is valid. Relational Graph Convolutional Networks (R-GCN) [68] extend Graph Convolutional Networks (GCN) to distinguish different relationships between entities in a KG. In R-GCN, different edge types use different projections, and only edges of the same relation type are associated with the same projection weight.
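The following PyTorch sketch illustrates a ConvE-style forward pass: head and relation embeddings are reshaped to 2D, stacked, convolved, and scored against all candidate tails at once (layer sizes and the 1-N scoring shortcut are illustrative simplifications, not the exact published architecture):

```python
# Rough sketch of a ConvE-style scoring pipeline (PyTorch assumed).
import torch
import torch.nn as nn

n_entities, n_relations, d, k1, k2 = 1000, 50, 200, 10, 20  # d = k1 * k2
entity_emb = nn.Embedding(n_entities, d)
rel_emb = nn.Embedding(n_relations, d)
conv = nn.Conv2d(1, 32, kernel_size=3)
fc = nn.Linear(32 * (2 * k1 - 2) * (k2 - 2), d)

def conve_scores(h_idx, r_idx):
    h = entity_emb(h_idx).view(-1, 1, k1, k2)   # reshape embedding to a 2D "image"
    r = rel_emb(r_idx).view(-1, 1, k1, k2)
    x = torch.cat([h, r], dim=2)                # stack head and relation: (batch, 1, 2*k1, k2)
    x = torch.relu(conv(x))                     # 2D convolution yields feature maps
    x = fc(x.flatten(start_dim=1))              # vectorize and project back to d dimensions
    return torch.sigmoid(x @ entity_emb.weight.t())  # score against every candidate tail

scores = conve_scores(torch.tensor([0]), torch.tensor([3]))  # shape: (1, n_entities)
```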

Path-based models such as PTransE [43] extend TransE by introducing a path-based translation model. GAKE [23] considers the contextual information in the graph by considering the path information starting from an entity. RDF2Vec [64] uses random walks to consider the graph structure and then applies the word embedding model on the paths to learn the embeddings of the entities and the relations. However, the prediction of head or tail entities with RDF2Vec is non-trivial because it is based on a language modeling approach.
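As an illustration of the random-walk idea behind RDF2Vec, the following sketch extracts walks from a toy graph and feeds them as "sentences" to a word embedding model (gensim is assumed; the graph, walk depth, and hyperparameters are made up):

```python
# Sketch of RDF2Vec-style embeddings: random walks + word embeddings.
import random
from collections import defaultdict
from gensim.models import Word2Vec

triples = [("a", "knows", "b"), ("b", "worksAt", "c"), ("a", "livesIn", "d")]
out_edges = defaultdict(list)
for h, r, t in triples:
    out_edges[h].append((r, t))

def random_walk(start, depth=4):
    walk, node = [start], start
    for _ in range(depth):
        if not out_edges[node]:
            break
        r, node = random.choice(out_edges[node])
        walk += [r, node]   # relations become tokens of the "sentence", too
    return walk

walks = [random_walk(e) for e in list(out_edges) for _ in range(10)]
model = Word2Vec(sentences=walks, vector_size=32, window=5, sg=1, min_count=1)
vec_a = model.wv["a"]   # embedding of entity "a"
```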

Multimodal KG embeddings make use of different kinds of literals such as numeric, text, or images (see [29] for detailed analysis). This group of algorithms considers aspects of the (D) extension of logic languages (cf. Section 2). For example, DKRL [86] extends TransE by incorporating the textual entity descriptions encoded using a continuous bag-of-words approach. Jointly (ALSTM) [93] extends the DKRL model with a gating strategy and uses attentive LSTM to encode the textual entity descriptions. MADLINK [6] uses SBERT for representing entity descriptions and the structured representation is learned by performing random walks where at each step the relations to be crawled are ranked using “predicate frequency – inverse triple frequency” (pf-itf).

3.2.Inductive link prediction

Adapting most of the existing transductive LP models to inductive settings requires expensive re-training to learn embeddings for unseen entities, making them inapplicable for predictions involving such entities. To overcome this, inductive LP approaches were introduced, which are discussed in the following.

Statistical rule-mining approaches make use of logical formulas to learn patterns from KGs. Systems such as AnyBURL [47,48] generalise random walks over the graph into Horn clause rules which are then used for link prediction: if the body of a rule matches a path in the graph, the rule predicts that the triple in its conclusion should also be in the graph. NeuralLP [97] learns first-order logical rules in an end-to-end differentiable model. DRUM [67] is another method that applies a differentiable approach for mining first-order logical rules from KGs and improves over NeuralLP.
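The following sketch applies a single rule of the kind AnyBURL mines (the rule and the toy triples are our own examples): if the rule body matches a path, the head triple is predicted.

```python
# Applying a mined Horn rule for link prediction (illustrative):
# worksAt(X, Z) AND locatedIn(Z, Y) => livesIn(X, Y)
triples = {("alice", "worksAt", "kit"), ("kit", "locatedIn", "karlsruhe")}

def apply_rule(triples):
    predictions = set()
    for (x, r1, z1) in triples:
        if r1 != "worksAt":
            continue
        for (z2, r2, y) in triples:
            if r2 == "locatedIn" and z1 == z2:   # body matched a path x -> z -> y
                predictions.add((x, "livesIn", y))
    return predictions - triples                 # keep only genuinely new links

print(apply_rule(triples))   # {('alice', 'livesIn', 'karlsruhe')}
```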

Embedding-based methods have also been proposed for the inductive setting. GraphSAGE [32] performs inductive LP by training entity encoders through feed-forward and graph neural networks. However, in GraphSAGE, the set of attributes (e.g., bag-of-words) is fixed before learning entity representations, restricting their application to downstream tasks [16]. One way to learn entity embeddings is to use graph neural networks for aggregating the neighborhood information [31,78]. However, these methods require unseen entities to be surrounded by known entities and fail to handle entirely new graphs [73] (i.e., they work only in a semi-inductive setting). KEPLER [80] is a unified model for knowledge embedding and pre-trained language representation: it encodes textual entity descriptions with a pre-trained language model (LM) as their embeddings, and then jointly optimizes the KG embedding and LM objectives. However, KEPLER is computationally expensive due to the additional LM objective and requires more training data. Inspired by DKRL, BLP [15] uses a pre-trained LM for learning representations of entities via an LP objective. QBLP [1] extends BLP to hyper-relational KGs by exploiting the semantics present in qualifiers. The previously discussed models only consider unseen entities and not unseen relations. RAILD [30], on the other hand, generates a relation-to-relation network for efficiently learning relation features, thereby solving the task of fully-inductive LP for unseen relations. RMPI [26] is another method that performs fully-inductive link prediction by using relational message passing, whereas the traditional methods perform message passing over the entities. A detailed survey on inductive LP algorithms is provided by Hubert et al. [35].

3.3.Entity type prediction

SDType [60] is a statistical heuristic model that exploits links between instances using weighted voting and assumes that certain relations occur only with particular types. It does not perform well if two or more classes share the same sets of properties, or if specific relations are missing for the entities. Many machine learning models, including neural network-based ones, have been proposed for type prediction. Cat2Type [7] takes into account the semantics of the textual information in Wikipedia categories using language models such as BERT. To consider the structural information of Wikipedia categories, a category-to-category network is generated and fed to Node2Vec to obtain category embeddings. The embeddings of both structural and textual information are combined to classify entities into their types. The approach by Biswas et al. [5] leverages different word embedding models, trained on triples, together with a classification model to predict the entity types; however, contextual information is not captured. A Scalable Local Classifier per Node (SLCN) is used by the model in [49] for type prediction based on a set of incoming and outgoing relations. However, entities with only a few relations are likely to be misclassified. FIGMENT [94] uses a global model and a context model. The global model predicts entity types based on the entity mentions from the corpus and the entity names. The context model calculates a score for each context of an entity and assigns it to a type. FIGMENT therefore requires a large annotated corpus, which is a drawback of the method. In APE [38], a partially labeled attributed entity-to-entity network is constructed containing structural, attribute, and type information for entities, followed by deep neural networks to learn the entity embeddings. MRGCN [84] is a multi-modal message-passing network that learns end-to-end from the structure of KGs as well as from multimodal node features. In HMGCN [39], the authors propose a GCN-based model to predict the entity types considering the relations, textual entity descriptions, and Wikipedia categories. The ConnectE [101] and AttET [103] models find correlations between neighborhood entities to predict missing types. Ridle [83] learns entity embeddings and the latent distribution of relations using Restricted Boltzmann Machines, allowing it to capture semantically related entities based on their relations. This model is tailored to KGs where entities from different classes are described with different relations. CUTE [92] performs hierarchical classification for cross-lingual entity typing by exploiting category, property, and property-value pairs. MuLR [95] learns multi-level representations of entities via character, word, and entity embeddings, followed by hierarchical multi-label classification.

Discussion The algorithms discussed in Section 3 focus on encoding the factual information and structural relationships among entities in the KG. In other words, these approaches treat KGs as sets of triples and consider only statements in the ABox for generating KG embeddings and performing LP. Although some embeddings are computed using not only triple-level information but also contextual entity information based on neighboring relations and nodes (by implementing graph walks), all of these approaches ignore other forms of knowledge, particularly the knowledge found in LLMs or in expressive axioms of the logical languages that form the backbone of knowledge graph semantics. For this reason, despite the great advances in reasoning achieved by KG embeddings, they still lack a deeper semantic understanding of entities, relationships, and classes in KGs and of the more complex logical constraints that underlie them. The subsequent sections focus on these aspects of LP.

4.Large language models for knowledge graph completion

LLMs can be categorized into encoder-decoder, encoder-only, and decoder-only models. Encoder-decoder and encoder-only models, such as BERT [19] and RoBERTa [44], are masked language models: discriminative models pretrained to predict a masked word. These models have achieved SoTA performance on NLP tasks such as entity recognition, sentiment analysis, etc. On the other hand, decoder-only models, such as LLaMA [75], ChatGPT [3], and GPT-4 [10], are autoregressive models: generative models pretrained on the task of predicting the next word. The rest of this section focuses on how these two categories of models have been utilized in the context of KG completion.

One of the pioneering LLM-based approaches for KG completion is KG Bidirectional Encoder Representations from Transformers (KG-BERT) [98], which is fine-tuned on the task of KG completion and represents triples as textual sequences. The model represents entities and relations by their names or descriptions and takes the resulting entity-relation-entity word sequence as the input sentence of the BERT model for fine-tuning, from which a triple score is computed. Despite being an LLM-based approach, KG-BERT does not outperform models considering the structural information of a KG in terms of ranking-based metrics, i.e., hits@k. Kim et al. [40] attribute the shortcomings of KG-BERT to the fact that it does not handle relations properly and has difficulties choosing the correct answer in the presence of lexically similar candidates. Therefore, Kim et al. propose a multitask learning-based model [40] that combines the tasks of relation prediction and relevance ranking with LP, leading to better learning of relational information.
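The following sketch illustrates the core idea of verbalizing a triple for a BERT-style classifier (Hugging Face transformers is assumed; the classification head below is randomly initialized, so the score is only meaningful after fine-tuning as in KG-BERT):

```python
# Sketch of the KG-BERT idea: score a verbalized triple with a sequence classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # binary: triple plausible or not
)

# Entity/relation names (or descriptions) form the textual sequence of the triple
h, r, t = "Barack Obama", "place of birth", "Honolulu"
inputs = tokenizer(f"{h} [SEP] {r} [SEP] {t}", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
plausibility = torch.softmax(logits, dim=-1)[0, 1]  # P(triple is true), after fine-tuning
```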

The previously described methods still suffer from high overheads because of costly scoring functions and a lack of structured knowledge in the textual encoders. The Structured Augmented text Representation (StAR) model [77] was proposed, where each triple is partitioned into two asymmetric parts, similar to a translation-based graph embedding approach, and both parts are encoded into contextualized representations with a Siamese-style textual encoder. Yet, KG completion methods based on pretrained language models (PLMs) lag in performance w.r.t. structure-based algorithms. Lv et al. [45] highlight that the reason underlying this lag is the evaluation setting, which is currently based on the Closed World Assumption (CWA), where a fact absent from the KG is considered false. Under the CWA, the performance of an LP algorithm is measured by its capability to predict a set of links that were removed from the KG. In contrast, LLMs introduce external knowledge, which may lead to the prediction of new links that are semantically correct but were not in the original KG (and therefore not in the evaluation set) and, hence, not counted in the success metric. Additionally, PLMs are utilized in an inappropriate manner: when triples are used as sentences, the generated sentences are incoherent. Therefore, Lv et al. [45] present a PLM-based approach dubbed PKGC. This work targets the first problem by proposing manual annotation as an alternative. However, a medium-sized dataset with 10,000 entities and 10,000 triples in the test set yields up to 200 million candidate triples that would need true labels, precluding exhaustive human annotation. This observation leads to a new evaluation metric called CR@1, where triples are sampled from the test set, the missing entities are filled with the top-1 predicted entity, and manual annotation is then performed to measure the correct ratio of these triples. PKGC addresses the second problem by converting each triple and its support information (i.e., the definition and attribute information) into natural prompt sentences, which are fed to the PLM. PKGC outperforms structural and LLM-based methods in various modalities, i.e., with the attribute and definition information.

GenKGC [88] converts the KG completion task into a sequence-to-sequence (Seq2Seq) generation task, where the input to the model is (head, relation, ?). Following the "in-context learning" paradigm of GPT-3, the authors select from the training set samples consisting of several triples with the same relation as the input, termed a relation-guided demonstration. GenKGC introduces an entity-aware hierarchical decoder during generation for better representation learning and reduced time complexity. On traditional datasets such as WN18RR and FB15k-237, GenKGC underperforms structural SoTA models on hits@k (confirming the hypothesis made by Lv et al. [45]) but outperforms masked language model-based approaches, demonstrating the capabilities of generative models for KG completion. The Seq2Seq generative KG completion algorithm KG-S2S [11] additionally takes into account the aspect of emerging KGs. Given a KG query, KG-S2S directly generates the target entity text by fine-tuning the pre-trained language model. KG-S2S learns the input representations of the entities and relations using entity descriptions, soft prompts, and Seq2Seq dropout. The KG elements, i.e., entities, relations, and timestamps, are treated as flat text, enabling the model to generate novel entities for KGs. The method is evaluated in static, few-shot, and temporal settings.
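The following sketch illustrates the Seq2Seq formulation shared by GenKGC and KG-S2S (T5 is assumed as an off-the-shelf backbone; the query format is our own illustration, not the exact format of either system, and the model would need fine-tuning on KG queries to produce useful entities):

```python
# Sketch of a Seq2Seq formulation of KG completion (transformers/T5 assumed).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# The (head, relation, ?) query is verbalized as flat text
query = "predict tail: Barack Obama | place of birth | ?"
inputs = tokenizer(query, return_tensors="pt")

# Beam search yields a ranked list of candidate tail entities
outputs = model.generate(**inputs, num_beams=5, num_return_sequences=5,
                         max_new_tokens=16)
candidates = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
```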

Discussion LLMs have revolutionized the natural language processing (NLP) landscape by their ability to generate coherent and relevant text. However, they are not without limitations. One of the major concerns with LLMs is their tendency to hallucinate [69], which means they may generate coherent yet entirely fictional information or associations. This poses a significant challenge in knowledge graph completion tasks, where the goal is to accurately predict missing links or facts.

Moreover, LLMs typically require extensive fine-tuning and adaptation to specific domains to perform effectively. This is particularly prevalent in specialized domains such as the biomedical, legal, or scientific fields, where terms and context can vary significantly from general language use. LLM performance also lags for torso and tail entities, about which the LLM has seen limited or sparse knowledge during learning [70]. Therefore, while LLMs show promise in improving the quality of KG completion, they need to be augmented with additional information to cope with knowledge scarcity, or with domain-specific resources to provide accurate and reliable results in such domains.

Regarding the evaluation of KG completion methods, it is important to recognize that existing benchmarking methodologies often rely on the Closed World Assumption (CWA). However, CWA-based evaluations may not be adequate for assessing methods that integrate external knowledge into the prediction process, such as LLMs. This is because such approaches could produce accurate predictions that may not be represented in the KG itself, leading to spurious false positives under the CWA. Thus, there is a growing need for novel experimental configurations and evaluation metrics that can accommodate more complex and nuanced assessments, such as considering the confidence or uncertainty levels of predictions or incorporating additional external knowledge sources to provide context.

In conclusion, while LLMs present a valuable resource for knowledge graph completion tasks, their inherent limitations, particularly in terms of hallucination and domain-specific adaptation, need to be addressed. Additionally, the development of more sophisticated evaluation methodologies that go beyond the CWA will be crucial to ensure a reliable understanding of the LLM capabilities for KG completion in various domains.

5.Towards capturing semantics in knowledge graph embeddings

5.1.Capturing type information for knowledge graph completion

Recent initiatives leverage schematic information in the form of type information about entities, both with and without considering type hierarchies. TKRL [87] considers the hierarchical information of entity types by using hierarchical type encoders. It is based on the assumption that each entity should have multiple representations for its different (super)types. TransT [46] also considers the entity types and their hierarchical structure. It goes one step further by constructing relation types from entity types and capturing the prior distribution of the entity types by computing the type-based semantic similarity of the related entities and relations. Based on the prior distribution, multiple embedding representations of each entity (a set of semantic vectors instead of a single vector) are generated in different contexts, and the posterior probability of the entity and relation prediction is then estimated. Zhang et al. [99] consider entity types as constraints on the set of all entities and let these type constraints induce an isomorphic collection of subsets in the embedding space. The framework introduces additional cost functions to model the fitness between these constraints and the entity and relation embeddings. JOIE [33] employs cross-view and intra-view modeling, such that (i) the cross-view association jointly learns the embeddings of the ontological concepts and the instance entities, and (ii) the intra-view models learn the structured knowledge related to entities as well as the ontological information (the hierarchy-aware encoding) separately. The model is evaluated on the tasks of triple completion and entity typing. Automated Entity Type Representation for KG Embedding (AutoETER) [55] learns entity, relation, and entity type embeddings from entity-specific and type-specific triples for LP tasks. It considers the relation as a translation between the types of the head and the tail entity with the help of a relation-aware projection mechanism. Type-Aware Graph Attention neTworks for reasoning over KGs (TAGAT) [81] combines the entity type information with the neighborhood information of the types while generating embeddings with Graph ATtention networks (GAT). Relation-level attention aims at distinguishing the importance of each associated relation for the entity. The neighborhood information of each type is also considered via type-level attention, since each relation may connect different entity groups even if the head entity belongs to the same group. Entity-level attention determines how important each neighboring entity is for an entity under a given relation. TrustE [100] aims at building entity-typed structured embeddings with tuple trustworthiness, taking into account possible noise in entity types, which is usually ignored by current methods. TrustE encodes the entities and entity types into separate spaces with a structural projection matrix. Trustworthiness is ensured by detecting noisy entity types, where the energy function focuses more on pairs of an entity and its type with high trustworthiness. The model is evaluated on detecting entity-type noise as well as on entity-type prediction.
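To give a flavour of how type information can be injected into the training process, the following sketch implements type-constrained corruption of triples, a simple illustration of the type-as-constraint idea (the entities and relation ranges are made up; this is not the formulation of any of the cited systems):

```python
# Illustrative type-constrained negative sampling: corrupt a triple only with
# entities whose type matches the relation's expected range.
import random

entity_types = {"paris": "City", "france": "Country",
                "berlin": "City", "germany": "Country"}
relation_range = {"capitalOf": "Country"}   # expected type of the tail

def corrupt_tail(h, r, t, entities):
    # Candidate negatives must respect the type constraint of relation r
    candidates = [e for e in entities
                  if e != t and entity_types[e] == relation_range[r]]
    return (h, r, random.choice(candidates))

print(corrupt_tail("paris", "capitalOf", "france", list(entity_types)))
# e.g. ('paris', 'capitalOf', 'germany') -- a hard, type-consistent negative
```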

5.2.Ontology-enriched knowledge graph embeddings

Even though the previously discussed approaches use some of the schematic information, such as the types of entities and the type hierarchy, they still ignore much of the concept-level knowledge captured in ontologies, i.e., TBox information in terms of description logic axioms. As reported by Chen et al. [12], KGE models that interpret TBox triples as ABox triples (i.e., ignoring their semantics) do not achieve high performance in reasoning tasks such as membership and subsumption prediction. Therefore, in this section, we analyse the KGE models that do take TBox information into account, and their expressivity with regard to the constructs and axioms of the logic languages discussed in Section 2. Table 2 presents an overview of these approaches.

A first generation of these systems represented concepts as (high-dimensional) spheres in the embedding space (e.g., [42]). However, while the intersection of concepts is a common operation, the intersection of two n-balls is not an n-ball, leading to challenges when measuring the distance between concepts and inferring equivalence between concepts. A second generation instead represents concepts as high-dimensional boxes, since boxes are closed under intersection. ELEm [42], one of the first of these geometric approaches, generates low-dimensional vector spaces from EL++ by approximating the interpretation function, extending TransE with the semantics of conjunction, existential quantification, and the bottom concept. It is evaluated on LP for protein-protein interaction. EMEL++ [50] evaluates the algorithm on the task of subsumption reasoning and compares it to ELEm, where these semantics are represented but not properly evaluated. Similar to TransE, EMEL++ interprets relations as translations operating between the classes. BoxEL [89] and ELBE [61] extend ELEm by representing concepts as axis-parallel boxes described by two vectors, either the lower and upper corners or the center and offset. In BoxEL [89], the authors show the aforementioned advantage of box embeddings over ball embeddings with an example related to the conjunction operator: ball embeddings cannot express Parent ⊓ Male ⊑ Father as properly as box embeddings can. Moreover, translations cannot model the relation isChildOf between a Person and a Parent when they have two different volumes. In addition to the box representation, ELBE [61] defines a loss function for each of the normal forms representing the axioms expressed in EL++ (shown in Table 2), such as conjunction, the bottom concept, etc. Taking a further step, Box2EL [36] learns box representations not only of the concepts but also of the roles, preserving as much of the semantics of the ontology as possible. It uses a mechanism similar to BoxEL for the representation of the concepts. The previous methods define roles (binary relations) as translations as in TransE, but Box2EL associates every role with a head box and a tail box, so that every point in the head box is related to every point in the tail box with the help of bump vectors. Bump vectors model the interaction between concepts and dynamically move the position of the embeddings of related concepts. CatE [102], on the other hand, embeds ALC ontologies with the help of category-theoretical semantics, i.e., the semantics of logic languages that formalizes interpretations using categories instead of sets. This is advantageous because categories have a graph-like structure. TransOWL [14] and its extensions allow for the inclusion of OWL axioms into the embedding process by modifying the loss function of TransE so that it gives a higher weight to triples that involve OWL vocabulary such as owl:inverseOf, owl:equivalentClass, etc. OWL2Vec* [12] uses a word embedding model to create embeddings of entities and words from a corpus generated by random walks over the ontology. The method is evaluated on the tasks of class membership prediction and class subsumption prediction.
OntoZSL [25] is another approach that takes into account the ontological schema by considering the structural information as well as the textual information of the classes (via rdfs:subClassOf) and predicates (via rdfs:subPropertyOf, rdfs:domain, and rdfs:range) in the ontology for KG completion.
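To illustrate how box embeddings capture subsumption geometrically, the following sketch uses the center/offset parameterization and penalizes any part of box(C) that lies outside box(D) (a simplification of the published loss functions, not the exact BoxEL/ELBE/Box2EL formulation):

```python
# Illustrative box-embedding subsumption loss: C ⊑ D holds when box(C) ⊆ box(D).
import numpy as np

def subsumption_loss(center_c, offset_c, center_d, offset_d):
    # For every dimension, the corners of box(C) must not stick out of box(D);
    # the hinge terms below are zero exactly when containment holds.
    lower_violation = np.maximum(0, (center_d - offset_d) - (center_c - offset_c))
    upper_violation = np.maximum(0, (center_c + offset_c) - (center_d + offset_d))
    return float(np.sum(lower_violation + upper_violation))

c_c, o_c = np.array([0.0, 0.0]), np.array([1.0, 1.0])   # box for concept C
c_d, o_d = np.array([0.0, 0.0]), np.array([2.0, 2.0])   # box for concept D
print(subsumption_loss(c_c, o_c, c_d, o_d))             # 0.0: C lies inside D
```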

Table 2

Comparison of the expressivity of semantically enriched embeddings. Notation: a, b are instances, C, D, E are concepts, r, r1, r2, s are relations. The symbol ✓ (resp. (✓)) indicates that it has been demonstrated by construction or empirically that the approach supports (resp. partially supports) the expression; otherwise, ✗ is indicated.

Approaches compared: ELEm, EMEL++, BoxEL, ELBE, Box2EL, CatE, OWL2Vec*, TransOWL, OntoZSL.

Expressions considered:
  Constructors: {a}, C(a), r(a,b)
  EL++: C ⊑ D, C ⊓ D ⊑ E, ∃r.C ⊑ D, C ⊑ ∃r.D, C ⊓ D ⊑ ⊥, ∃r.C ⊑ ⊥, C ⊑ ⊥
  ALC: C ⊔ D, ∀r.D, ¬C, C ⊑ ∀r.D, ∀r.C ⊑ D
  SH: r ⊑ s, r1 ∘ r2 ⊑ s

(Only the row and column labels of the comparison matrix are reproduced here.)

Discussion This section discussed methods leveraging the schematic information contained in KGs, from type hierarchies to the semantics of classes and predicates in description logic axioms. The approaches utilizing type hierarchies for LP are, however, in most cases evaluated on the traditional triple and entity-type prediction tasks and completely ignore tasks related to schema completion involving terms only from the TBox of the KG. Furthermore, the limited expressivity of these models makes them unsuitable for the more complex tasks of deductive reasoning. These shortcomings are addressed by the approaches representing the semantics underlying different description logic axioms with the help of box embeddings. Yet, these approaches are limited to certain kinds of axioms and do not cover more expressive statements like role transitivity. This is evidenced in Table 2, where only a few approaches can handle certain aspects of SH, while the expressivity of OWL ontologies (from SHIF(D) to SROIQ(D)) is far from being reached. Lastly, the approaches discussed in this section still lack a unified evaluation w.r.t. benchmark datasets, leading to an unclear performance comparison of the existing algorithms and of the impact of adding such schematic information when training the models.

6.Existing evaluation settings for knowledge graph completion using embeddings

This section provides an overview of the evaluation protocol including the benchmark datasets as well as the used evaluation metrics. Table 3 shows an overview of the evaluation tasks based on the categories of algorithms introduced in the rest of the paper along with the benchmark datasets.

Benchmark datasets The benchmark datasets vary from task to task, depending on the requirements of the algorithms. Most datasets are based on WordNet, YAGO, and Freebase, which utilize only the triple information and do not incorporate an ontology by default. A detailed comparison of the datasets for LP in a transductive setting is provided by Gesese et al. [27]. The datasets used for link prediction (LP) typically provide training/validation splits; still, these splits are computed arbitrarily. Therefore, a fine-grained analysis of how the methods perform on different relations or classes is not always possible with these datasets. ELEm has been evaluated on the Protein-Protein Interaction (PPI) dataset for the LP task. However, the successor algorithms introduced datasets that consider more expressive description logic axioms, such as the Gene Ontology (GO) [41] or FoodOn [21]. In addition, some works [50,90] on ontology embedding models have further classified the test sets into normal forms (NF) to evaluate the models on subsumption prediction between atomic concepts (NF1), atomic concepts and conjunctions (NF2), and atomic concepts and existential restrictions (NF3 and NF4).

Table 3

Overview of the tasks and the datasets used for evaluation for each of the categories.

Transductive Link Prediction
  Link Prediction: FB15K [9], FB15K-237 [74], WN18 [9], WN18RR [18], YAGO3-10 [18], LiterallyWikidata [28]
  Entity Classification, Triple Classification: FB15K [9], FB15K-237 [74], WN18 [9], WN18RR [18], YAGO3-10 [18]

Inductive Link Prediction
  Link Prediction: Wikidata5M [80], Wikidata68K [30], ILPC 2022 [24]

LLM-Based Methods
  Link Prediction: FB15K-237, WN18RR, FB15K-237N [45], FB15K-237NH [45], Wiki27K [45], OpenBG500 [91], Nell-One [91]

Methods using Type-Hierarchy
  Link Prediction: WN18RR, WN18, FB15k, FB15KET, YAGO43KET [51]
  Entity Type Prediction: WN18RR, WN18, FB15k, FB15KET, YAGO43KET [51], FB15K+ [86], FB15K*, JF17K [99], NELL-995 [81], YAGO26K-906 [85], DB111K-174 [85]

Methods using Description Logic Axioms
  Inductive Reasoning (Link Prediction, Subsumption, Class Membership): NELL-ZS [63], Wikidata-ZS [63], GO [41], FoodOn [21]
  Deductive Reasoning (unsatisfiability of named concepts, predicting axioms in the deductive closure of an ontology): FoodOn [21], HeLIS [22], GO [41], SNOMED-CT [20], GALEN [62], Anatomy [52], PPI [72], FOBI [102], NRO [102]

Metrics For the task of LP, evaluation protocols typically involve rank-based metrics. Most of the algorithms use hits@k, where k varies from 1 to 100; typical values are 1, 3, 10, and 50. Other ranking metrics are Mean Rank (MR) and Mean Reciprocal Rank (MRR). Besides rank-based metrics, for the task of entity classification or entity type prediction, some works (e.g., [83]) report metrics used in information retrieval, i.e., precision, recall, and their corresponding aggregates such as F1 scores. In addition, works on ontology embedding models [42,90,102] also report the area under the ROC curve (AUC).
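For reference, the following sketch computes the rank-based metrics from a list of ranks (the ranks below are illustrative; in practice they are the filtered ranks of each held-out triple's true entity among all candidates):

```python
# Computing MR, MRR, and hits@k from the ranks of true entities.
import numpy as np

ranks = np.array([1, 3, 12, 2, 105])           # illustrative ranks

mr = ranks.mean()                               # Mean Rank (lower is better)
mrr = (1.0 / ranks).mean()                      # Mean Reciprocal Rank (higher is better)
hits_at = {k: (ranks <= k).mean() for k in (1, 3, 10)}  # fraction ranked in top k
print(mr, mrr, hits_at)
```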

Evaluation and reported results Most of the algorithms are evaluated on the task of LP since they consider only the triple structure of the KG. Exceptions are the tasks used to evaluate the algorithms incorporating description logic axioms. In the case of transductive LP, studies have shown that the efficiency of these algorithms greatly depends on the choice of hyperparameters. For example, Ruffinelli et al. [66] show that RESCAL, one of the earliest models, outperforms all later approaches when the models are trained with proper hyperparameters. A similar study by Ali et al. [2] performs a large-scale evaluation of 21 KG embedding models using the PyKEEN framework and provides the best sets of hyperparameters. The results of the study show that the performance of the models is not only determined by their architecture: the training approach, the loss function, and the explicit modeling of inverse relations are also crucial. This leads to the conclusion that a unified benchmarking is needed which allows for comparing the approaches under the same conditions. Such benchmarking can reveal interesting insights, e.g., some models no longer considered competitive can still outperform more sophisticated approaches under certain configurations. While the evaluation setting for transductive LP algorithms is well established, the other three categories still lack a unified evaluation setting. Lastly, most evaluations report on the models' performance over the entire test set. These analyses fail to provide a detailed understanding of the limitations of the embeddings on different aspects of the KG, e.g., tail elements (entities, classes, or relations) or relation types (e.g., transitive, symmetric, etc.). Few studies have addressed this issue and present specific results, e.g., per normal form in the ontology [36], or aggregate metrics like F1-macro [83] that take into account the weights of classes. Still, there is a notable discrepancy in the literature regarding the manner in which datasets, benchmarks, and analyses are reported.

7.Conclusion and recommendations

In this section, we take a critical perspective on the SoTA over the past decade, based on the overview given in the preceding sections. We end with recommendations for fruitful future directions of work.

7.1.Critical reflection

Ignoring semantics The majority of KG completion algorithms only consider the ABox (Definition 1). This effectively reduces a KG to a “data graph”, i.e., relations between entities. Only recent algorithms have appeared that treat KGs as objects with computer-interpretable semantics (Definition 2), such as those discussed in Section 5.1 (to a limited extent) and Section 5.2 (to a larger extent).

Limitations of the transductive setting The majority of the work on KG completion follows a transductive setting, i.e., it does not allow for the completion of a KG with new entities or relations. The few algorithms that follow an inductive setting mostly consider only the prediction of new entities, with almost no algorithms aiming to predict new relations. Limiting algorithms to the transductive setting severely restricts the downstream tasks to which they can be applied. For example, missing relations between known proteins and drugs can be predicted (protein-drug interaction), but transductive algorithms cannot be used to predict new drugs from the knowledge of known proteins.

Evaluation settings The community working on KG completion is severely hampered by a lack of standardised evaluation settings. There are multiple concerns in this area:

  • No standard protocols for hyper-parameter sweeping, leading to incomparable and unreproducible results. This led to the rather embarrassing result that a paper from 2020 [66] threw into doubt the progress of the entire field over a decade since the early work on RESCAL in 2011 [54].

  • Evaluation datasets with serious limitations. It took years before the widely used FB15K dataset was discovered to suffer from serious data leakage from the training to the testing and validation splits: many triples were simply inverses of other triples, allowing trivial inverse-triple prediction algorithms to achieve high scores [74]. Furthermore, the small number of datasets used in the literature (Table 3) carries the risk that methods optimise for the benchmark instead of for the task.

  • Dataset size. Datasets used in evaluations are typically small (e.g., on the order of 10⁵ triples for the widely used FB15k-237 and YAGO3-10) in comparison to the KGs that are currently in routine practical use (often on the order of 10⁸ triples or beyond).

  • Evaluation metric. The most widely deployed evaluation metric is hits@k on a held-out set of triples. This method favours reconstructing known links (that were already present in the original KG, but held out) over the prediction of genuinely new links that are semantically correct but did not appear in the original KG. A method may produce many genuinely new links that rank higher than a predicted held-out link, and such a high-value method would end up with a low hits@k score. Ironically, it is these methods which are likely to be of most value in downstream tasks. The recently proposed metric sem@k [34] aims to rectify this to some extent, although it remains unclear at the moment how much of this evaluation can be done without expensive human annotation of gold standards.

Bias against external knowledge Almost all current KG completion methods aim to predict links in a KG based on the properties of the KG itself, and this even holds true for most of the inductive methods that go beyond a transductive setting. On the other hand, the new generation of methods that exploit LLMs for KG prediction uses an external knowledge source (i.e., LLMs, see Section 4) to predict new links. Not only is this an obviously promising step to take in an inductive setting, but it also holds the promise of taking LP beyond simply extrapolating the structural patterns that are already present in the KG. After all, methods that merely extrapolate the existing patterns in a KG are likely to replicate the sources of the incompleteness of the KG and are thus unlikely to actually solve the problem that motivated the KG completion task in the first place. It is important to observe that LLMs are by no means the only useful source of external knowledge that can be injected into the KG completion process. Other KGs can also provide such background knowledge, leading to an interesting blurring between the tasks of KG linking and KG completion [4]. Recent work on exploiting the temporal evolution of a KG as a source of information is another example of using information outside the KG for KG completion.

7.2.Recommendations

The above critical reflections lead to several recommendations for future directions.

Towards semantic embeddings If the work on knowledge graph completion is to go beyond simple data graph completion, the effort will need to focus on including the semantics of the KG. This can range from simply accounting for known relations between types, such as the subsumption hierarchy or known disjointness relations, to more sophisticated reasoning about the algebraic properties of roles (symmetry, anti-symmetry, transitivity, etc.) or properties such as the minimal and maximal cardinality of roles. The methods discussed in Section 5.2 still need to be extended to take these semantics into account.

Use of external resources We also believe that this will have to go hand in hand with a willingness to develop KG completion algorithms that take into account knowledge sources beyond the original KG. These knowledge sources can be (i) updated versions of the KG itself, leading to the need for dynamic KG embedding algorithms, (ii) temporal information about facts, making KG embeddings more precise and updatable with respect to time-sensitive facts, (iii) rules that encode domain-specific constraints and capture complex relationships and structural patterns in the KG that may not be expressible with logical axioms such as the ones presented in Table 2, (iv) multiple knowledge graphs that together provide more complete knowledge about the facts represented in a KG, and (v) last but not least, LLMs, which contain a vast amount of background knowledge.

Datasets and evaluation Evaluation datasets need to grow larger in size and more diverse in nature, and should be more representative of real-world downstream tasks. This goes hand-in-hand with the first two recommendations. More methodological hygiene needs to be practiced both in the protocols for evaluations (e.g., hyper-parameter settings) and in the reporting of these settings. These recommendations come with challenges of their own, such as the huge computational resources needed to compute embeddings over larger datasets, and the hallucination problems that arise with LLMs. In addition to the datasets and experimental settings, alternative metrics will need to be developed that better reflect the actual value of the predicted links for downstream tasks.

References

[1] M. Ali, M. Berrendorf, M. Galkin, V. Thost, T. Ma, V. Tresp and J. Lehmann, Improving inductive link prediction using hyper-relational facts, in: SEMWEB, 2021.

[2] M. Ali, M. Berrendorf, C.T. Hoyt, L. Vermue, M. Galkin, S. Sharifzadeh, A. Fischer, V. Tresp and J. Lehmann, Bringing light into the dark: A large-scale evaluation of knowledge graph embedding models under a unified framework, IEEE Trans. Pattern Anal. Mach. Intell. 44(12) (2022), 8825–8845. doi:10.1109/TPAMI.2021.3124805.

[3] Y. Bang, S. Cahyawijaya, N. Lee, W. Dai, D. Su, B. Wilie, H. Lovenia, Z. Ji, T. Yu, W. Chung, Q.V. Do, Y. Xu and P. Fung, A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity, CoRR (2023). arXiv:2302.04023. doi:10.48550/arXiv.2302.04023.

[4] M. Baumgartner, D. Dell'Aglio, H. Paulheim and A. Bernstein, Towards the web of embeddings: Integrating multiple knowledge graph embedding spaces with FedCoder, J. Web Semant. 75 (2023), 100741. doi:10.1016/j.websem.2022.100741.

[5] R. Biswas, J. Portisch, H. Paulheim, H. Sack and M. Alam, Entity type prediction leveraging graph walks and entity descriptions, in: International Semantic Web Conference (ISWC), 2022.

[6] R. Biswas, H. Sack and M. Alam, MADLINK: Attentive multihop and entity descriptions for link prediction in knowledge graphs, Semantic Web Journal (2022).

[7] R. Biswas, R. Sofronova, H. Sack and M. Alam, Cat2Type: Wikipedia category embeddings for entity typing in knowledge graphs, in: K-CAP: Knowledge Capture Conference, 2021.

[8] A. Bordes, N. Usunier, A. García-Durán, J. Weston and O. Yakhnenko, Translating embeddings for modeling multi-relational data, in: Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013, Lake Tahoe, Nevada, United States, December 5–8, 2013, C.J.C. Burges, L. Bottou, Z. Ghahramani and K.Q. Weinberger, eds, 2013, pp. 2787–2795, https://proceedings.neurips.cc/paper/2013/hash/1cecc7a77928ca8133fa24680a88d2f9-Abstract.html.

[9] A. Bordes, J. Weston, R. Collobert and Y. Bengio, Learning structured embeddings of knowledge bases, in: Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, AAAI, W. Burgard and D. Roth, eds, AAAI Press, 2011, http://www.aaai.org/ocs/index.php/AAAI/AAAI11/paper/view/3659.

[10] S. Bubeck, V. Chandrasekaran, R. Eldan, J. Gehrke, E. Horvitz, E. Kamar, P. Lee, Y.T. Lee, Y. Li, S.M. Lundberg, H. Nori, H. Palangi, M.T. Ribeiro and Y. Zhang, Sparks of artificial general intelligence: Early experiments with GPT-4, CoRR (2023). arXiv:2303.12712. doi:10.48550/arXiv.2303.12712.

[11] C. Chen, Y. Wang, B. Li and K. Lam, Knowledge is flat: A Seq2Seq generative framework for various knowledge graph completion, in: Proceedings of the 29th International Conference on Computational Linguistics, COLING, N. Calzolari, C. Huang, H. Kim, J. Pustejovsky, L. Wanner, K. Choi, P. Ryu, H. Chen, L. Donatelli, H. Ji, S. Kurohashi, P. Paggio, N. Xue, S. Kim, Y. Hahm, Z. He, T.K. Lee, E. Santus, F. Bond and S. Na, eds, International Committee on Computational Linguistics, 2022, pp. 4005–4017, https://aclanthology.org/2022.coling-1.352.

[12] J. Chen, P. Hu, E. Jiménez-Ruiz, O.M. Holter, D. Antonyrajah and I. Horrocks, OWL2Vec*: Embedding of OWL ontologies, Mach. Learn. 110(7) (2021), 1813–1845. doi:10.1007/s10994-021-05997-6.

[13] Y. Dai, S. Wang, N.N. Xiong and W. Guo, A survey on knowledge graph embedding: Approaches, applications and benchmarks, Electronics (2020).

[14] C. d'Amato, N.F. Quatraro and N. Fanizzi, Injecting background knowledge into embedding models for predictive tasks on knowledge graphs, in: The Semantic Web, R. Verborgh, K. Hose, H. Paulheim, P.-A. Champin, M. Maleshkova, O. Corcho, P. Ristoski and M. Alam, eds, Springer International Publishing, Cham, 2021, pp. 441–457. ISBN 978-3-030-77385-4. doi:10.1007/978-3-030-77385-4_26.

[15] D. Daza, M. Cochez and P. Groth, Inductive entity representations from text via link prediction, in: WWW '21: The Web Conference 2021, Virtual Event, Ljubljana, Slovenia, April 19–23, 2021, J. Leskovec, M. Grobelnik, M. Najork, J. Tang and L. Zia, eds, ACM/IW3C2, 2021, pp. 798–808. doi:10.1145/3442381.3450141.

[16] D. Daza, M. Cochez and P.T. Groth, Inductive entity representations from text via link prediction, in: Proceedings of the Web Conference 2021, 2021.

[17] T. Dettmers, P. Minervini, P. Stenetorp and S. Riedel, Convolutional 2D knowledge graph embeddings, in: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), S.A. McIlraith and K.Q. Weinberger, eds, AAAI Press, 2018, pp. 1811–1818, https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17366.

[18] T. Dettmers, P. Minervini, P. Stenetorp and S. Riedel, Convolutional 2D knowledge graph embeddings, in: Thirty-Second AAAI Conference on Artificial Intelligence, 2018.

[19] J. Devlin, M. Chang, K. Lee and K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019.

[20] K.P. Donnelly, SNOMED-CT: The advanced terminology and coding system for eHealth, Studies in Health Technology and Informatics 121 (2006), 279–290, https://api.semanticscholar.org/CorpusID:22491040.

[21] D.M. Dooley, E.J. Griffiths, G.P.S. Gosal, P.L. Buttigieg, R. Hoehndorf, M. Lange, L.M. Schriml, F.S.L. Brinkman and W.W.L. Hsiao, FoodOn: A harmonized food ontology to increase global food traceability, quality control and data integration, NPJ Science of Food 2 (2018). doi:10.1038/s41538-018-0032-6.

[22] M. Dragoni, T. Bailoni, R. Maimone and C. Eccher, HeLiS: An ontology for supporting healthy lifestyles, in: The Semantic Web – ISWC 2018 – 17th International Semantic Web Conference, Monterey, CA, USA, October 8–12, 2018, Proceedings, Part II, D. Vrandecic, K. Bontcheva, M.C. Suárez-Figueroa, V. Presutti, I. Celino, M. Sabou, L. Kaffee and E. Simperl, eds, Lecture Notes in Computer Science, Vol. 11137, Springer, 2018, pp. 53–69. doi:10.1007/978-3-030-00668-6_4.

[23] J. Feng, M. Huang, Y. Yang and X. Zhu, GAKE: Graph aware knowledge embedding, in: COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, Osaka, Japan, December 11–16, 2016, N. Calzolari, Y. Matsumoto and R. Prasad, eds, ACL, 2016, pp. 641–651, https://aclanthology.org/C16-1062/.

[24] 

M. Galkin, M. Berrendorf and C.T. Hoyt, An open challenge for inductive link prediction on knowledge graphs, CoRR, (2022) . https://doi.org/10.48550/arXiv.2203.01520. arXiv:2203.01520. doi:10.48550/ARXIV.2203.01520.

[25] 

Y. Geng, J. Chen, Z. Chen, J.Z. Pan, Z. Ye, Z. Yuan, Y. Jia and H. Chen, OntoZSL: Ontology-enhanced zero-shot learning, in: WWW ’21: The Web Conference 2021, Virtual Event, Ljubljana, Slovenia, April 19–23, 2021, J. Leskovec, M. Grobelnik, M. Najork, J. Tang and L. Zia, eds, ACM/IW3C2, (2021) , pp. 3325–3336. doi:10.1145/3442381.3450042.

[26] 

Y. Geng, J. Chen, J.Z. Pan, M. Chen, S. Jiang, W. Zhang and H. Chen, Relational message passing for fully inductive knowledge graph completion, in: 39th IEEE International Conference on Data Engineering, ICDE 2023, Anaheim, CA, USA, April 3–7, 2023, IEEE, (2023) , pp. 1221–1233. doi:10.1109/ICDE55515.2023.00098.

[27] 

G.A. Gesese, M. Alam and H. Sack, LiterallyWikidata – a benchmark for knowledge graph completion using literals, in: ISWC, (2021) .

[28] 

G.A. Gesese, M. Alam and H. Sack, LiterallyWikidata – a benchmark for knowledge graph completion using literals, in: The Semantic Web – ISWC 2021 – 20th International Semantic Web Conference, ISWC 2021, Virtual Event, October 24–28, 2021, Proceedings, A. Hotho, E. Blomqvist, S. Dietze, A. Fokoue, Y. Ding, P.M. Barnaghi, A. Haller, M. Dragoni and H. Alani, eds, Lecture Notes in Computer Science, Vol. 12922: , Springer, (2021) , pp. 511–527. doi:10.1007/978-3-030-88361-4_30.

[29] 

G.A. Gesese, R. Biswas, M. Alam and H. Sack, A survey on knowledge graph embeddings with literals: Which model links better literal-ly?, Semantic Web 12: (4) ((2021) ), 617–647. doi:10.3233/SW-200404.

[30] 

G.A. Gesese, H. Sack and M. Alam, RAILD: Towards leveraging relation features for inductive link prediction in knowledge graphs, in: Proceedings of the 11th International Joint Conference on Knowledge Graphs, IJCKG 2022, Hangzhou, China, October 27–28, 2022, A. Artale, D. Calvanese, H. Wang and X. Zhang, eds, ACM, (2022) , pp. 82–90. doi:10.1145/3579051.3579066.

[31] 

T. Hamaguchi, H. Oiwa, M. Shimbo and Y. Matsumoto, Knowledge transfer for out-of-knowledge-base entities: A graph neural network approach, in: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, (2017) , pp. 1802–1808. doi:10.24963/ijcai.2017/250.

[32] 

W.L. Hamilton, R. Ying and J. Leskovec, Inductive representation learning on large graphs, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, (2017) , pp. 1025–1035. ISBN 9781510860964.

[33] 

J. Hao, M. Chen, W. Yu, Y. Sun and W. Wang, Universal representation learning of knowledge bases by jointly embedding instances and ontological concepts, in: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2019, Anchorage, AK, USA, August 4–8, 2019, A. Teredesai, V. Kumar, Y. Li, R. Rosales, E. Terzi and G. Karypis, eds, ACM, (2019) , pp. 1709–1719. doi:10.1145/3292500.3330838.

[34] 

N. Hubert, P. Monnin, A. Brun and D. Monticolo, Sem@K: Is my knowledge graph embedding model semantic-aware? CoRR, (2023) . arXiv:2301.05601. doi:10.48550/arXiv.2301.05601.

[35] 

N. Hubert, P. Monnin and H. Paulheim, Beyond transduction: a survey on inductive, few shot, and zero shot link prediction in knowledge graphs, CoRR, (2023) . https://doi.org/10.48550/arXiv.2312.04997. arXiv:2312.04997. doi:10.48550/ARXIV.2312.04997.

[36] 

M. Jackermeier, J. Chen and I. Horrocks, Dual box embeddings for the description logic EL++, in: Proceedings of the International Conference on World Wide Web, ACM, (2024) .

[37] 

N. Jia, X. Cheng and S. Su, Improving knowledge graph embedding using locally and globally attentive relation paths, in: Advances in Information Retrieval – 42nd European Conference on IR Research, ECIR 2020, Lisbon, Portugal, April 14–17, 2020, Proceedings, Part I, J.M. Jose, E. Yilmaz, J. Magalhães, P. Castells, N. Ferro, M.J. Silva and F. Martins, eds, Lecture Notes in Computer Science, Vol. 12035: , Springer, (2020) , pp. 17–32. doi:10.1007/978-3-030-45439-5_2.

[38] 

H. Jin, L. Hou, J. Li and T. Dong, Attributed and predictive entity embedding for fine-grained entity typing in knowledge bases, in: International Conference on Computational Linguistics, (2018) .

[39] 

H. Jin, L. Hou, J. Li and T. Dong, Fine-grained entity typing via hierarchical multi graph convolutional networks, in: Empirical Methods in Natural Language Processing and International Joint Conference on Natural Language Processing, (2019) .

[40] 

B. Kim, T. Hong, Y. Ko and J. Seo, Multi-task learning for knowledge graph completion with pre-trained language models, in: Proceedings of the 28th International Conference on Computational Linguistics, COLING, D. Scott, N. Bel and C. Zong, eds, International Committee on Computational Linguistics, (2020) , pp. 1737–1743. doi:10.18653/v1/2020.coling-main.153.

[41] 

M. Kulmanov and R. Hoehndorf, Evaluating the effect of annotation size on measures of semantic similarity, J. Biomed. Semant. 8: (1) ((2017) ), 7:1–7:10, https://doi.org/10.1186/s13326-017-0119-z. doi:10.1186/S13326-017-0119-Z.

[42] 

M. Kulmanov, W. Liu-Wei, Y. Yan and R. Hoehndorf, EL embeddings: Geometric construction of models for the description logic EL++, in: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10–16, 2019, S. Kraus, ed., ijcai.org, (2019) , pp. 6103–6109. doi:10.24963/ijcai.2019/845.

[43] 

Y. Lin, Z. Liu, H. Luan, M. Sun, S. Rao and S. Liu, Modeling relation paths for representation learning of knowledge bases, in: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17–21, 2015, L. Màrquez, C. Callison-Burch, J. Su, D. Pighin and Y. Marton, eds, The Association for Computational Linguistics, (2015) , pp. 705–714. doi:10.18653/v1/D15-1082.

[44] 

Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer and V. Stoyanov, RoBERTa: A robustly optimized BERT pretraining approach, CoRR, (2019) . http://arxiv.org/abs/1907.11692 arXiv:1907.11692.

[45] 

X. Lv, Y. Lin, Y. Cao, L. Hou, J. Li, Z. Liu, P. Li and J. Zhou, Do pre-trained models benefit knowledge graph completion? A reliable evaluation and a reasonable approach, in: Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, May 22–27, 2022, S. Muresan, P. Nakov and A. Villavicencio, eds, Association for Computational Linguistics, (2022) , pp. 3570–3581. https://doi.org/10.18653/v1/2022.findings-acl.282. doi:10.18653/v1/2022.findings-acl.282.

[46] 

S. Ma, J. Ding, W. Jia, K. Wang and M. Guo, TransT: Type-based multiple embedding representations for knowledge graph completion, in: Machine Learning and Knowledge Discovery in Databases – European Conference, ECML PKDD 2017, Skopje, Macedonia, September 18–22, 2017, Proceedings, Part I, M. Ceci, J. Hollmén, L. Todorovski, C. Vens and S. Dzeroski, eds, Lecture Notes in Computer Science, Vol. 10534: , Springer, (2017) , pp. 717–733. doi:10.1007/978-3-319-71249-9_43.

[47] 

C. Meilicke, M.W. Chekol, D. Ruffinelli and H. Stuckenschmidt, Anytime bottom-up rule learning for knowledge graph completion, in: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10–16, 2019, S. Kraus, ed., ijcai.org, (2019) , pp. 3137–3143. doi:10.24963/ijcai.2019/435.

[48] 

C. Meilicke, M. Fink, Y. Wang, D. Ruffinelli, R. Gemulla and H. Stuckenschmidt, Fine-grained evaluation of rule- and embedding-based systems for knowledge graph completion, in: SEMWEB, (2018) .

[49] 

A. Melo, H. Paulheim and J. Völker, Type prediction in RDF knowledge bases using hierarchical multilabel classification, in: WIMS, (2016) .

[50] 

S. Mondal, S. Bhatia and R. Mutharaju, EmEL++: Embeddings for EL++ description logic, in: Proceedings of the AAAI 2021 Spring Symposium on Combining Machine Learning and Knowledge Engineering (AAAI-MAKE 2021), Stanford University, Palo Alto, California, USA, March 22–24, 2021, A. Martin, K. Hinkelmann, H. Fill, A. Gerber, D. Lenat, R. Stolle and F. van Harmelen, eds, CEUR Workshop Proceedings, Vol. 2846: , CEUR-WS.org, (2021) , https://ceur-ws.org/Vol-2846/paper19.pdf.

[51] 

C. Moon, P. Jones and N.F. Samatova, Learning entity type embeddings for knowledge graph completion, in: Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM ’17, Association for Computing Machinery, New York, NY, USA, (2017) , pp. 2215–2218. ISBN 9781450349185. doi:10.1145/3132847.3133095.

[52] 

C.J. Mungall, C. Torniai, G.V. Gkoutos, S.E. Lewis and M.A. Haendel, Uberon, an integrative multi-species anatomy ontology, Genome Biology 13: ((2012) ), R5–R5. https://api.semanticscholar.org/CorpusID:15453742. doi:10.1186/gb-2012-13-1-r5.

[53] 

D.Q. Nguyen, T.D. Nguyen, D.Q. Nguyen and D.Q. Phung, A novel embedding model for knowledge base completion based on convolutional neural network, in: Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, M.A. Walker, H. Ji and A. Stent, eds, Association for Computational Linguistics, (2018) , pp. 327–333. doi:10.18653/v1/n18-2053.

[54] 

M. Nickel, V. Tresp and H. Kriegel, A three-way model for collective learning on multi-relational data, in: Proceedings of the 28th International Conference on Machine Learning, ICML, L. Getoor and T. Scheffer, eds, Omnipress, (2011) , pp. 809–816, https://icml.cc/2011/papers/438_icmlpaper.pdf.

[55] 

G. Niu, B. Li, Y. Zhang, S. Pu and J. Li, AutoETER: Automated entity type representation with relation-aware attention for knowledge graph embedding, in: Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16–20 November 2020, T. Cohn, Y. He and Y. Liu, eds, Findings of ACL, Vol. EMNLP 2020, Association for Computational Linguistics, (2020) , pp. 1172–1181. doi:10.18653/v1/2020.findings-emnlp.105.

[56] 

E. Ntoutsi, P. Fafalios, U. Gadiraju, V. Iosifidis, W. Nejdl, M.-E. Vidal, S. Ruggieri, F. Turini, S. Papadopoulos, E. Krasanakis et al., Bias in data-driven artificial intelligence systems – an introductory survey, Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 10: (3) ((2020) ), e1356.

[57] 

J.Z. Pan, S. Razniewski, J.-C. Kalo, S. Singhania, J. Chen, S. Dietze, H. Jabeen, J. Omeliyanenko, W. Zhang, M. Lissandrini, R. Biswas, G. de Melo, A. Bonifati, E. Vakaj, M. Dragoni and D. Graux, Large language models and knowledge graphs: Opportunities and challenges, Transactions on Graph Data and Knowledge 1: (1) ((2023) ), 2:1–2:38. doi:10.4230/TGDK.1.1.2.

[58] 

S. Pan, L. Luo, Y. Wang, C. Chen, J. Wang and X. Wu, Unifying large language models and knowledge graphs: A roadmap, IEEE Transactions on Knowledge and Data Engineering ((2024) ).

[59] 

H. Paulheim, Knowledge graph refinement: A survey of approaches and evaluation methods, Semantic Web 8: (3) ((2017) ), 489–508. doi:10.3233/SW-160218.

[60] 

H. Paulheim and C. Bizer, Type inference on noisy RDF data, in: ISWC, (2013) .

[61] 

X. Peng, Z. Tang, M. Kulmanov, K. Niu and R. Hoehndorf, Description logic EL++ embeddings with intersectional closure, CoRR, (2022) . https://arxiv.org/abs/2202.14018 arXiv:2202.14018.

[62] 

R.P. Pole, The GALEN High Level Ontology, (1996) , https://api.semanticscholar.org/CorpusID:62738916.

[63] 

P. Qin, X. Wang, W. Chen, C. Zhang, W. Xu and W.Y. Wang, Generative adversarial zero-shot relational learning for knowledge graphs, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34: , (2020) , pp. 8673–8680.

[64] 

P. Ristoski and H. Paulheim, RDF2Vec: RDF graph embeddings for data mining, in: The Semantic Web – ISWC 2016 – 15th International Semantic Web Conference, Kobe, Japan, October 17–21, 2016, Proceedings, Part I, P. Groth, E. Simperl, A.J.G. Gray, M. Sabou, M. Krötzsch, F. Lécué, F. Flöck and Y. Gil, eds, Lecture Notes in Computer Science, Vol. 9981: , (2016) , pp. 498–514. doi:10.1007/978-3-319-46523-4_30.

[65] 

S. Rudolph, Foundations of description logics, in: Reasoning Web International Summer School, Springer, (2011) , pp. 76–136.

[66] 

D. Ruffinelli, S. Broscheit and R. Gemulla, You CAN teach an old dog new tricks! On training knowledge graph embeddings, in: 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26–30, 2020, OpenReview.net, (2020) . https://openreview.net/forum?id=BkxSmlBFvr.

[67] 

A. Sadeghian, M. Armandpour, P. Ding and D.Z. Wang, DRUM: End-to-End Differentiable Rule Mining on Knowledge Graphs, (2019) .

[68] 

M.S. Schlichtkrull, T.N. Kipf, P. Bloem, R. van den Berg, I. Titov and M. Welling, Modeling relational data with graph convolutional networks, in: The Semantic Web – 15th International Conference, ESWC, Proceedings, A. Gangemi, R. Navigli, M. Vidal, P. Hitzler, R. Troncy, L. Hollink, A. Tordai and M. Alam, eds, Lecture Notes in Computer Science, Vol. 10843: , Springer, (2018) , pp. 593–607. doi:10.1007/978-3-319-93417-4_38.

[69] 

F.M. Suchanek and A.T. Luu, Knowledge bases and language models: Complementing forces, in: Rules and Reasoning – 7th International Joint Conference, RuleML+RR 2023, Oslo, Norway, September 18–20, 2023, Proceedings, Lecture Notes in Computer Science, Vol. 14244: , Springer, (2023) , pp. 3–15. doi:10.1007/978-3-031-45072-3_1.

[70] 

K. Sun, Y.E. Xu, H. Zha, Y. Liu and X.L. Dong, Head-to-Tail: How Knowledgeable are Large Language Models (LLM)? A.K.A. Will LLMs Replace Knowledge Graphs? CoRR, (2023) . https://doi.org/10.48550/arXiv.2308.10168. arXiv:2308.10168. doi:10.48550/ARXIV.2308.10168.

[71] 

Z. Sun, Z. Deng, J. Nie and J. Tang, RotatE: Knowledge graph embedding by relational rotation in complex space, in: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6–9, 2019, OpenReview.net, (2019) . https://openreview.net/forum?id=HkgEQnRqYQ.

[72] 

D. Szklarczyk, J.H. Morris, H. Cook, M. Kuhn, S. Wyder, M. Simonovic, A. Santos, N.T. Doncheva, A. Roth, P. Bork, L.J. Jensen and C. von Mering, The STRING database in 2017: Quality-controlled protein-protein association networks, made broadly accessible, Nucleic Acids Res 45: (Database–Issue) ((2017) ), D362–D368. https://doi.org/10.1093/nar/gkw937. doi:10.1093/NAR/GKW937.

[73] 

K. Teru, E. Denis and W. Hamilton, Inductive relation prediction by subgraph reasoning, in: Proceedings of the 37th International Conference on Machine Learning, H.D. III and A. Singh, eds, Proceedings of Machine Learning Research, Vol. 119: , PMLR, (2020) , pp. 9448–9457, https://proceedings.mlr.press/v119/teru20a.html.

[74] 

K. Toutanova and D. Chen, Observed versus latent features for knowledge base and text inference, in: Proceedings of the 3rd Workshop on Continuous Vector Space Models and Their Compositionality, (2015) .

[75] 

H. Touvron, T. Lavril, G. Izacard, X. Martinet, M. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave and G. Lample, LLaMA: Open and efficient foundation language models, CoRR, (2023) . arXiv:2302.13971. doi:10.48550/arXiv.2302.13971.

[76] 

T. Trouillon, J. Welbl, S. Riedel, E. Gaussier and G. Bouchard, Complex embeddings for simple link prediction, in: ICML’16, JMLR.org, (2016) , pp. 2071–2080.

[77] 

B. Wang, T. Shen, G. Long, T. Zhou, Y. Wang and Y. Chang, Structure-augmented text representation learning for efficient knowledge graph completion, in: WWW ’21: The Web Conference, J. Leskovec, M. Grobelnik, M. Najork, J. Tang and L. Zia, eds, ACM/IW3C2, (2021) , pp. 1737–1748. doi:10.1145/3442381.3450043.

[78] 

P. Wang, J. Han, C. Li and R. Pan, Logic attention based neighborhood aggregation for inductive knowledge graph embedding, in: Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI’19/IAAI’19/EAAI’19, AAAI Press, (2019) . ISBN 978-1-57735-809-1. doi:10.1609/aaai.v33i01.33017152.

[79] 

Q. Wang, Z. Mao, B. Wang and L. Guo, Knowledge graph embedding: A survey of approaches and applications, IEEE Trans. Knowl. Data Eng. 29: (12) ((2017) ), 2724–2743. doi:10.1109/TKDE.2017.2754499.

[80] 

X. Wang, T. Gao, Z. Zhu, Z. Liu, J.-Z. Li and J. Tang, KEPLER: A unified model for knowledge embedding and pre-trained language representation, Transactions of the Association for Computational Linguistics 9: ((2021) ), 176–194. doi:10.1162/tacl_a_00360.

[81] 

Y. Wang, H. Wang, J. He, W. Lu and S. Gao, TAGAT: Type-aware graph attention neTworks for reasoning over knowledge graphs, Knowl. Based Syst. 233: ((2021) ), 107500. doi:10.1016/j.knosys.2021.107500.

[82] 

Z. Wang, J. Zhang, J. Feng and Z. Chen, Knowledge graph embedding by translating on hyperplanes, in: Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, Québec City, Québec, Canada, July 27–31, 2014, C.E. Brodley and P. Stone, eds, AAAI Press, (2014) , pp. 1112–1119, http://www.aaai.org/ocs/index.php/AAAI/AAAI14/paper/view/8531.

[83] 

T. Weller and M. Acosta, Predicting instance type assertions in knowledge graphs using stochastic neural networks, in: Proceedings of the 30th ACM International Conference on Information & Knowledge Management, (2021) , pp. 2111–2118.

[84] 

W.X. Wilcke, P. Bloem, V. de Boer, R.H. van’t Veer and F. van Harmelen, End-to-end entity classification on multimodal knowledge graphs, arXiv, (2020) .

[85] 

R. Xie, Z. Liu, J. Jia, H. Luan and M. Sun, Representation learning of knowledge graphs with entity descriptions, Proceedings of the AAAI Conference on Artificial Intelligence 30: (1) ((2016) ). https://ojs.aaai.org/index.php/AAAI/article/view/10329.

[86] 

R. Xie, Z. Liu and M. Sun, Representation learning of knowledge graphs with hierarchical types, in: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI, S. Kambhampati, ed., IJCAI/AAAI Press, (2016) , pp. 2965–2971, http://www.ijcai.org/Abstract/16/421.

[87] 

R. Xie, Z. Liu and M. Sun, Representation learning of knowledge graphs with hierarchical types, in: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9–15 July 2016, S. Kambhampati, ed., IJCAI/AAAI Press, (2016) , pp. 2965–2971. http://www.ijcai.org/Abstract/16/421.

[88] 

X. Xie, N. Zhang, Z. Li, S. Deng, H. Chen, F. Xiong, M. Chen and H. Chen, From discrimination to generation: Knowledge graph completion with generative transformer, in: Companion of the Web Conference, F. Laforest, R. Troncy, E. Simperl, D. Agarwal, A. Gionis, I. Herman and L. Médini, eds, ACM, (2022) , pp. 162–165.

[89] 

B. Xiong, N. Potyka, T. Tran, M. Nayyeri and S. Staab, Faithful embeddings for EL++ knowledge bases, in: The Semantic Web – ISWC 2022 – 21st International Semantic Web Conference, Virtual Event, October 23–27, 2022, Proceedings, U. Sattler, A. Hogan, C.M. Keet, V. Presutti, J.P.A. Almeida, H. Takeda, P. Monnin, G. Pirrò and C. d’Amato, eds, Lecture Notes in Computer Science, Vol. 13489: , Springer, (2022) , pp. 22–38. doi:10.1007/978-3-031-19433-7_2.

[90] 

B. Xiong, N. Potyka, T. Tran, M. Nayyeri and S. Staab, Faithful embeddings for EL++) knowledge bases, in: The Semantic Web – ISWC 2022 – 21st International Semantic Web Conference, Virtual Event, October 23–27, 2022, Proceedings, U. Sattler, A. Hogan, C.M. Keet, V. Presutti, J.P.A. Almeida, H. Takeda, P. Monnin, G. Pirrò and C. d’Amato, eds, Lecture Notes in Computer Science, Vol. 13489: , Springer, (2022) , pp. 22–38. doi:10.1007/978-3-031-19433-7_2.

[91] 

W. Xiong, M. Yu, S. Chang, X. Guo and W.Y. Wang, One-shot relational learning for knowledge graphs, in: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31–November 4, 2018, E. Riloff, D. Chiang, J. Hockenmaier and J. Tsujii, eds, Association for Computational Linguistics, (2018) , pp. 1980–1990, https://doi.org/10.18653/v1/d18-1223. doi:10.18653/v1/D18-1223.

[92] 

B. Xu, Y. Zhang, J. Liang, Y. Xiao, S. Hwang and W. Wang, Cross-lingual type inference, in: International Conference Database Systems for Advanced Applications, DASFAA, (2016) .

[93] 

J. Xu, X. Qiu, K. Chen and X. Huang, Knowledge graph representation with jointly structural and textual encoding, in: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19–25, 2017, C. Sierra, ed., ijcai.org, (2017) , pp. 1318–1324. doi:10.24963/ijcai.2017/183.

[94] 

Y. Yaghoobzadeh, H. Adel and H. Schütze, Corpus-level fine-grained entity typing, J. Artif. Intell. Res. ((2018) ).

[95] 

Y. Yaghoobzadeh and H. Schütze, Multi-level representations for fine-grained typing of knowledge base entities, in: Conference of the European Chapter of the Association for Computational Linguistics, (2017) .

[96] 

B. Yang, W. Yih, X. He, J. Gao and L. Deng, Embedding entities and relations for learning and inference in knowledge bases, in: 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7–9, 2015, Conference Track Proceedings, Y. Bengio and Y. LeCun, eds, (2015) . http://arxiv.org/abs/1412.6575.

[97] 

F. Yang, Z. Yang and W.W. Cohen, Differentiable learning of logical rules for knowledge base reasoning, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, (2017) , pp. 2316–2325. ISBN 9781510860964.

[98] 

L. Yao, C. Mao and Y. Luo, KG-BERT: BERT for knowledge graph completion, CoRR, (2019) . arXiv:1909.03193. http://arxiv.org/abs/1909.03193.

[99] 

R. Zhang, F. Kong, C. Wang and Y. Mao, Embedding of hierarchically typed knowledge bases, in: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), S.A. McIlraith and K.Q. Weinberger, eds, AAAI Press, (2018) , pp. 2046–2053.

[100] 

Y. Zhao, Z. Li, W. Deng, R. Xie and Q. Li, Learning entity type structured embeddings with trustworthiness on noisy knowledge graphs, Knowl. Based Syst. 215: ((2021) ), 106630. doi:10.1016/j.knosys.2020.106630.

[101] 

Y. Zhao, A. Zhang, R. Xie, K. Liu and X. Wang, Connecting embeddings for knowledge graph entity typing, in: Annual Meeting of the Association for Computational Linguistics, (2020) .

[102] 

F. Zhapa-Camacho and R. Hoehndorf, CatE: Embedding ALC ontologies using category-theoretical semantics, CoRR, (2023) . arXiv:2305.07163. doi:10.48550/arXiv.2305.07163.

[103] 

J. Zhuo, Q. Zhu, Y. Yue, Y. Zhao and W. Han, A neighborhood-attention fine-grained entity typing for knowledge graph completion, in: ACM International Conference on Web Search and Data Mining, (2022) .