This is a linkpost for https://arxiv.org/abs/2209.00445

In this paper, we present a novel method for understanding embeddings by transforming the embedding space into a comprehensible conceptual space. We present an algorithm for deriving a conceptual space with dynamic, on-demand granularity, and a method for mapping any vector in the original, incomprehensible space to an understandable vector in the conceptual space. We combine human tests with cross-model tests to show that the conceptualized vectors indeed represent the semantics of the original vectors. We also demonstrate the use of our method for various tasks, including comparing the semantics of alternative models.


The method works as follows:

We define a meta-algorithm, CES (Conceptualizing Embedding Spaces), that, for any given embedding method f, a set of concepts C, and a mapping function τ from concepts to text, takes a vector in the latent space L and returns a vector in the conceptual space C.
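A minimal sketch of this pipeline follows directly from the definition above. The one assumption here is the scoring function: we use cosine similarity between the input vector and each concept's embedding, which may differ from the paper's exact mapping.

```python
import numpy as np

def ces(f, concepts, tau, v):
    """Map a latent vector v to the conceptual space.

    f        -- embedding function: text -> np.ndarray (the latent space L)
    concepts -- list of concepts (the axes of the conceptual space C)
    tau      -- mapping from a concept to a textual description
    v        -- a vector in L to be conceptualized

    Assumption: each conceptual coordinate is the cosine similarity
    between v and the embedding of the concept's text.
    """
    concept_vecs = np.stack([f(tau(c)) for c in concepts])
    # Cosine similarity between v and every concept embedding.
    return concept_vecs @ v / (
        np.linalg.norm(concept_vecs, axis=1) * np.linalg.norm(v) + 1e-12
    )
```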

Figure 1 in the paper outlines the method.

The concept set C comes from a hierarchical ontology, allowing conceptual representations at various levels of abstraction.

We also introduce a technique for creating a conceptual space whose granularity can be adjusted on demand: given a text, the space is refined to be more fine-grained on the topics relevant to that text, producing varying degrees of abstraction across different topics, as sketched below.
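One plausible way to implement this with a hierarchical ontology is to expand a concept into its children whenever the input scores highly against it. The refinement rule and threshold here are our assumptions; the paper's algorithm may differ.

```python
import numpy as np

def refine_concepts(f, tau, v, root_concepts, children, threshold=0.5, max_depth=3):
    """Greedily refine the concept set for a specific latent vector v.

    children  -- dict mapping a concept to its sub-concepts in the ontology
                 (absent or empty for leaves)
    threshold -- hypothetical similarity cutoff for expanding a concept

    Concepts the vector matches strongly are replaced by their children,
    so the resulting space is fine-grained exactly where v is relevant.
    """
    def sim(c):
        e = f(tau(c))
        return float(e @ v / (np.linalg.norm(e) * np.linalg.norm(v) + 1e-12))

    frontier = [(c, 0) for c in root_concepts]
    selected = []
    while frontier:
        concept, depth = frontier.pop()
        kids = children.get(concept, [])
        if kids and depth < max_depth and sim(concept) >= threshold:
            frontier.extend((k, depth + 1) for k in kids)  # drill down
        else:
            selected.append(concept)  # keep at the current abstraction level
    return selected
```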

To demonstrate the method's comprehensibility and intuitiveness, we provide examples using the sentence embedding model SRoBERTa as our f. A table in the paper shows the top concepts for each sentence.
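As a usage sketch, the pieces above can be wired together with the sentence-transformers library. The checkpoint name, toy concept list, and identity τ here are illustrative, not the paper's exact setup; it reuses the `ces` function sketched earlier.

```python
from sentence_transformers import SentenceTransformer

# A Sentence-RoBERTa-style checkpoint; the paper's exact model may differ.
model = SentenceTransformer("all-distilroberta-v1")
f = lambda text: model.encode(text)

# Toy concept set; the paper draws concepts from a hierarchical ontology.
concepts = ["sports", "politics", "science", "food", "music"]
tau = lambda c: c  # here each concept's text is just its name

sentence = "The team scored in the final minute of the match."
scores = ces(f, concepts, tau, f(sentence))

# Print the top concepts for the sentence, highest similarity first.
for concept, score in sorted(zip(concepts, scores), key=lambda p: -p[1]):
    print(f"{concept}: {score:.3f}")
```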

In the paper, we conduct a variety of tests, including human evaluations and cross-model comparisons, to demonstrate that CES faithfully captures the semantics of the f embedding.

Furthermore, we present several practical applications of CES, such as comparing different models (sketched below) and generating visual representations of text. We also explore applying CES to the layers of LLMs, which provides insight into how the embedding space evolves across layers.
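Because conceptual vectors from different models live in the same concept space, they can be compared directly even when the models' latent spaces are incompatible. A minimal sketch of this idea, reusing the `ces`, `concepts`, and `tau` definitions above (the agreement metric is our assumption):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Two different embedding models; their latent spaces are not directly
# comparable, but their conceptualized vectors are.
f1 = SentenceTransformer("all-MiniLM-L6-v2").encode
f2 = SentenceTransformer("all-distilroberta-v1").encode

sentence = "The senate passed the new climate bill."
c1 = ces(f1, concepts, tau, f1(sentence))
c2 = ces(f2, concepts, tau, f2(sentence))

# Agreement between the two models on this sentence, in conceptual space.
agreement = c1 @ c2 / (np.linalg.norm(c1) * np.linalg.norm(c2))
print(f"conceptual agreement: {agreement:.3f}")
```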

The paper includes an illustration of CES applied across the embedding spaces of different models, as well as a visualization of how the BERT embedding of the text "Manhattan Project" changes across layers.
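Extracting per-layer representations to conceptualize is straightforward with the transformers library. In this sketch, mean-pooling over tokens and embedding the concepts at the same layer are our assumptions; it reuses the `ces`, `concepts`, and `tau` definitions above.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

def embed_at_layer(text, layer_idx):
    """Mean-pooled BERT hidden state of `text` at a given layer
    (assumption: mean pooling; the paper may pool differently)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden_states = bert(**enc).hidden_states  # embeddings + each layer
    return hidden_states[layer_idx].mean(dim=1).squeeze(0).numpy()

text = "Manhattan Project"
for layer_idx in range(bert.config.num_hidden_layers + 1):
    f_layer = lambda t, i=layer_idx: embed_at_layer(t, i)
    v = embed_at_layer(text, layer_idx)
    scores = ces(f_layer, concepts, tau, v)
    best = max(zip(concepts, scores), key=lambda p: p[1])
    print(f"layer {layer_idx:2d}: top concept = {best[0]} ({best[1]:.3f})")
```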

Our method shows promise for interpreting model embeddings through an intuitive conceptual space.

However, further testing is required to evaluate different mapping functions for converting the latent space into the conceptual space.
