Eli Schwartz, Leonid Karlinsky, et al.
NeurIPS 2018
Incorporating lexical knowledge from semantic resources (e.g., WordNet) has been shown to improve the quality of distributed word representations. This knowledge often comes in the form of relational triplets (x, r, y), where words x and y are connected by a relation type r. Existing methods either ignore the relation types, essentially treating the word pairs as generic related words, or employ rather restrictive assumptions to model the relational knowledge. We propose a novel approach to modeling relational knowledge based on low-rank subspace regularization, and we conduct experiments on standard tasks to evaluate its effectiveness.
Alan Akbik, Yunyao Li
ACL 2016
Barbara A. Han, Subhabrata Majumdar, et al.
Epidemics
Vidya Muthukumar, Tejaswini Pedapati, et al.
CVPRW 2019