Ask HN: Anyone using knowledge graphs for LLM agent memory/context management?

5 points by mbbah 5 hours ago

I’m building infrastructure for LLM agents and copilots that need to reason and operate over time—not just in single prompts.

One core challenge I keep hitting: managing evolving memory and context. RAG works for retrieval, and scratchpads are fine for short-term reasoning—but once agents need to maintain structured knowledge, track state, or coordinate multi-step tasks, things get messy fast; the context becomes less and less interpretable.

I’m experimenting with a shared memory layer built on a knowledge graph:

  - Agents can ingest structured/unstructured data into it

  - Memory updates dynamically as agents act

  - Devs can observe, query, and refine the graph

  - It supports high-level task modeling and dependency tracking (pre/postconditions)
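
To make that concrete, here's roughly the shape I'm sketching (Python; every name here is a placeholder for illustration, not a real library or our actual API):

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        id: str
        kind: str                       # e.g. "task", "fact", "entity"
        props: dict = field(default_factory=dict)

    @dataclass
    class Edge:
        src: str
        dst: str
        rel: str                        # e.g. "depends_on", "precondition_of"

    class GraphMemory:
        def __init__(self):
            self.nodes: dict[str, Node] = {}
            self.edges: list[Edge] = []

        def upsert(self, node: Node):
            # agents write facts, observations, and task state here as they act
            self.nodes[node.id] = node

        def relate(self, src: str, dst: str, rel: str):
            self.edges.append(Edge(src, dst, rel))

        def neighbors(self, node_id: str, rel: str | None = None):
            # devs (or agents) query the graph explicitly instead of doing similarity search
            return [self.nodes[e.dst] for e in self.edges
                    if e.src == node_id and (rel is None or e.rel == rel)]

        def ready_tasks(self):
            # a task becomes runnable once every precondition pointing at it is done
            done = {n.id for n in self.nodes.values() if n.props.get("done")}
            return [n for n in self.nodes.values()
                    if n.kind == "task"
                    and all(e.src in done
                            for e in self.edges
                            if e.dst == n.id and e.rel == "precondition_of")]

The point is that the graph is simultaneously the agents' working memory and something a human can inspect, query, and patch.
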
My questions:

  - Are you building agents that need persistent memory or task context?

  - Have you tried structured memory (graphs, JSON stores, etc.), or have you stuck with embeddings/scratchpads?

  - Would something like a graph-based memory actually help, or is it overkill for most real-world use?

I’m in the thick of validating this idea and would love to hear what’s working (or breaking) for others building with LLMs today.

Thanks in advance, HNers!

frenchmajesty 2 hours ago

Funny you should ask: I just ended up here after googling "graph memory LLM".

So yeah, I'm very much looking into it. I want my personal agent to grow to know me over time, and my life is not a bunch of disparate points spread out across a vector space. Rather, it's millions of nodes and edges that connect key things: who my parents were, where I grew up, what I like to do for fun and how it ties into my personality and strengths, etc.

Having this represented in a graph that a model can then explore would let it make implicit connections much more easily than attempting the same with embeddings.
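
Something as simple as this is what I mean by "explore" (networkx purely for illustration, and the facts are made up):

    import networkx as nx

    G = nx.DiGraph()
    G.add_edge("me", "rural Oregon", rel="grew_up_in")          # all made-up facts
    G.add_edge("rural Oregon", "hiking", rel="shaped_interest")
    G.add_edge("me", "hiking", rel="hobby")
    G.add_edge("hiking", "patience", rel="built_trait")

    # pull everything within 2 hops of "me" into the agent's context
    nearby = nx.single_source_shortest_path_length(G, "me", cutoff=2)
    for node, dist in sorted(nearby.items(), key=lambda kv: kv[1]):
        print(dist, node)

An embedding lookup for "why do I like hiking" might never surface the grew_up_in link, but a two-hop walk out from "me" reaches it immediately.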