Layla

Knowledge graphs constructed by LTM do not seem very accurate. Is that a problem?

Updated: Mar 30

Observant users will notice that the knowledge graph the long-term memory app constructs after ingesting a conversation is not entirely accurate.


This is not as big a problem as it might first appear.


The knowledge graph is constructed from "embeddings", not "words". In other words, it is a machine representation of the knowledge in the conversation shard, and it may not necessarily make sense to humans. The knowledge graph corresponds to the L1 cache in LTM, which is used more as a "heuristic" than as a literal "subject -> entity" relation. The words and relations in the graph are what the LLM considers key concepts, and they serve primarily as keys into the L2 cache, which contains factual summaries of the conversation shard.
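To make the L1/L2 relationship concrete, here is a minimal sketch of that two-tier lookup in Python. Everything in it is assumed for illustration - the names (embed, L1Node, recall), the toy embedding function, and the example caches are hypothetical, not the app's actual implementation:

from dataclasses import dataclass

import numpy as np


def embed(text: str) -> np.ndarray:
    # Placeholder embedding. A real system would call an embedding
    # model that places related texts close together; this toy version
    # just returns a deterministic unit vector to demonstrate data flow.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(8)
    return v / np.linalg.norm(v)


@dataclass
class L1Node:
    label: str            # the "word" the LLM chose; may look odd to humans
    vector: np.ndarray    # the embedding that does the real work
    shard_id: str         # which conversation shard this node points into


# L1 cache: a heuristic graph keyed by embeddings, not exact words.
L1_CACHE = [
    L1Node("hiking trip", embed("hiking trip"), "shard-001"),
    L1Node("trail snacks", embed("trail snacks"), "shard-001"),
    L1Node("project deadline", embed("project deadline"), "shard-002"),
]

# L2 cache: factual summaries of each conversation shard.
L2_CACHE = {
    "shard-001": "User planned a weekend hike and asked what food to pack.",
    "shard-002": "User discussed moving a work deadline to next Friday.",
}


def recall(query: str) -> str:
    # Step 1 (L1): find the graph node nearest the query in embedding space.
    q = embed(query)
    best = max(L1_CACHE, key=lambda node: float(np.dot(q, node.vector)))
    # Step 2 (L2): return the factual summary for that node's shard.
    return L2_CACHE[best.shard_id]


print(recall("what food did I plan to bring on the hike?"))

The point of the sketch: the human-readable labels on the L1 nodes never need to be "correct". Only their embeddings need to land near related queries, so that the right L2 summary gets retrieved.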


So, in short, don't fret too much if the entity relations don't seem to make sense - they do - they make sense to the LLM.


[Image: example knowledge graph from the long-term memory app]
