I feel like you can do the same using a single markdown file and an LLM (e.g. Claude Code).
I do it that way, and I hooked it up to the Telegram API. I’m able to ask things like “What’s my passport number?” and it just works.
Combine it with git and you have a Datomic-esque way of seeing facts getting added and retracted simply by traversing the commits.
I arrived at this solution after trying a more complex triplet-based approach and seeing that plain text files + HTTP calls work just as well and are human- (and AI-) friendly.
The main disadvantage is having unstructured data, but for content that fits inside the LLM context window, it doesn’t matter practically speaking. And even then, when context becomes the limiting factor, you can segment by category or start using embeddings.
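For the curious, the whole loop fits in a page of Python. A minimal sketch of what I mean, with the Telegram and Anthropic API details written from memory (double-check the current docs; the file name and model alias are placeholders):

    import os, requests

    NOTES = open("notes.md").read()  # the single markdown file of facts
    TG = f"https://api.telegram.org/bot{os.environ['TELEGRAM_TOKEN']}"

    def ask_llm(question):
        # plain HTTP call to the Anthropic Messages API
        r = requests.post(
            "https://api.anthropic.com/v1/messages",
            headers={"x-api-key": os.environ["ANTHROPIC_API_KEY"],
                     "anthropic-version": "2023-06-01"},
            json={"model": "claude-3-5-haiku-latest", "max_tokens": 512,
                  "messages": [{"role": "user",
                                "content": f"Notes:\n{NOTES}\n\nQuestion: {question}"}]})
        return r.json()["content"][0]["text"]

    offset = 0
    while True:  # long-poll Telegram for incoming messages
        updates = requests.get(f"{TG}/getUpdates",
                               params={"offset": offset, "timeout": 30}).json()["result"]
        for u in updates:
            offset = u["update_id"] + 1
            msg = u.get("message", {})
            if "text" in msg:
                requests.post(f"{TG}/sendMessage",
                              json={"chat_id": msg["chat"]["id"],
                                    "text": ask_llm(msg["text"])})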
ianbicking 2 hours ago [-]
I feel like I should understand the purpose of knowledge graphs, but I just... don't.
Like the example "CocoIndex supports Incremental Processing" becomes the subject/predicate/object triple (CocoIndex, supports, Incremental Processing)... so what? Are you going to look up "Incremental Processing" and get a list of related entities? That's not a term that is well enough defined to be meaningful across a variety of subjects. I can incrementally process my sandwich by taking small bites.
I guess you could actually expand "Incremental Processing" to some full definition. But then it's not really a knowledge graph, because the only entity ever associated with that new definition will be CocoIndex, and you are back to a single sentence that contains the information; you've just pretended it's structured. ("Supports" is hardly a well-defined term either!)
I can _kind of_ see how knowledge graphs can be used for limited relationships. If you want to map companies to board members, and board members to family members, etc. Very clearly and formally defined entities (like a person or company), with clearly defined relationships (board member, brother, etc). I still don't know how _useful_ the result is, but at least I can understand the validity of the model. But for everything else... am I missing something?
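To make the part I do find valid concrete: the well-defined case is essentially a triple store with wildcard lookups. A toy sketch with invented data:

    # a toy in-memory triple store with invented facts
    triples = {
        ("Acme Corp", "has_board_member", "Alice"),
        ("Globex", "has_board_member", "Alice"),
        ("Alice", "sibling_of", "Bob"),
        ("Initech", "has_board_member", "Bob"),
    }

    def query(s=None, p=None, o=None):
        # None is a wildcard in any position
        return [t for t in triples
                if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2])]

    # which boards does Alice sit on?
    print(query(p="has_board_member", o="Alice"))

    # two hops: boards reachable through Alice's siblings
    for _, _, sib in query(s="Alice", p="sibling_of"):
        print(sib, query(p="has_board_member", o=sib))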
alexchantavy 2 hours ago [-]
IMO knowledge graphs are a must have for security use-cases because of how well they handle many-to-many relationships. Who has access to read each storage bucket? Via which IAM policies? Who owns each bucket? What is the shortest possible role-assumption path available from internet-exposed compute instances to read this bucket? What is the effective blast radius from a vulnerability that allows remote code execution on an internet exposed compute instance?
Or, I have a Docker container image that is built from multiple base images owned by different teams in my organization. Who is responsible for fixing security vulnerabilities introduced by each layer?
We really could model these as tables but getting into all those joins makes things so cumbersome. Plus visualizing these things in a graph map is very compelling for presentation and persuading stakeholders to make security decisions.
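A toy version of the path question, with made-up entities (in practice this lives in a graph database, but the shape is the same):

    # made-up IAM entities in a networkx digraph
    import networkx as nx

    g = nx.DiGraph()
    g.add_edge("internet", "web-vm", rel="exposes")
    g.add_edge("web-vm", "role-app", rel="can_assume")
    g.add_edge("role-app", "role-data", rel="can_assume")
    g.add_edge("role-data", "pii-bucket", rel="can_read")
    g.add_edge("team-x", "pii-bucket", rel="owns")

    # shortest role-assumption path from exposed compute to the bucket
    print(nx.shortest_path(g, "internet", "pii-bucket"))

    # who/what can read the bucket directly?
    print([u for u, _, d in g.in_edges("pii-bucket", data=True)
           if d["rel"] == "can_read"])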
badmonster 56 minutes ago [-]
In my understanding, there are two kinds of use cases that can potentially be explored with knowledge graphs.
- Structured data - this is probably closer to the use case you mention
- Unstructured data - extracting relationships and building a KG with natural language understanding, which is what this article tries to explore. Here is a paper discussing this: https://arxiv.org/abs/2409.13731
In general it is an alternative way to easily establish connections between entities, and these relationships can help with discovery, recommendation, and retrieval. Thanks @alexchantavy for sharing use cases in security.
Would love to learn more from the community :)
th0ma5 4 hours ago [-]
People probably don't discuss the problems with open-world knowledge graphs enough. They are essentially the same class of problems as spam filters. Using an open language model to produce a graph doesn't, by definition, create a closed-world graph. This confusion, along with the general avoidance of measuring actual productivity outcomes, seems like an insurmountable problem in the knowledge world right now, and I feel language itself is failing at times to educate on these issues.
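A toy example of the distinction, to spell it out:

    # closed world: a fact absent from the graph is false.
    # open world: a fact absent from the graph is merely unknown.
    facts = {("alice", "works_at", "acme")}

    def closed_world(q):
        return q in facts                      # absent => False

    def open_world(q):
        return True if q in facts else None    # absent => unknown

    q = ("bob", "works_at", "acme")
    print(closed_world(q))  # False: asserts Bob does NOT work at Acme
    print(open_world(q))    # None: the graph simply doesn't say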
lyu07282 2 hours ago [-]
They don't even do any entity disambiguation, so the resulting graph won't be very useful indeed. I also saw people then use a different prompt to generate a Cypher query from user input for RAG; I can't imagine that actually works well. It would make a little more sense if they then used knowledge graph embeddings, but I'm not sure if Neo4j supports that.
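The kind of disambiguation step I mean, sketched crudely (real systems would also use context or embedding similarity, not just string normalization):

    import re

    def canonical(mention):
        # crude normalization: lowercase and drop punctuation/whitespace
        return re.sub(r"[^a-z0-9]", "", mention.lower())

    mentions = ["CocoIndex", "cocoindex", "Coco-Index"]
    nodes = {}
    for m in mentions:
        nodes.setdefault(canonical(m), []).append(m)
    print(nodes)  # all three collapse into one node: {'cocoindex': [...]}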
gorpy7 6 hours ago [-]
idk if it’s precisely the same, but o3 recently offered to create one for me in (was it markdown?), suggesting it was something it was willing to maintain for me.
cipehr 6 hours ago [-]
sorry, what is `o3`? I am not familiar with it... unless you're talking about the OpenAI ChatGPT model?
If so that's crazy, and I would love pointers on how to prompt it to suggest this.
I think it offered a few formats, but I specifically remember it would do it in Obsidian, to use the concept-map ability within it.
marviel 5 hours ago [-]
Mermaid, probably.
manishsharan 4 hours ago [-]
Why not merely upload all relevant documents into Gemini? Split the knowledge into smaller knowledge domains and have agents (backed by Gemini) for each domain?
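Something like this, sketched with the Gemini SDK from memory (the model name, file names, and prompts are placeholders; check the current docs):

    import google.generativeai as genai

    genai.configure(api_key="...")
    model = genai.GenerativeModel("gemini-1.5-flash")

    # one blob of documents per knowledge domain (placeholder files)
    domains = {
        "legal": open("legal_docs.md").read(),
        "finance": open("finance_docs.md").read(),
    }

    def answer(question):
        # first call routes to a domain, second answers within it
        pick = model.generate_content(
            f"Pick one domain from {list(domains)} for this question, "
            f"reply with the name only.\nQ: {question}").text.strip().lower()
        context = domains.get(pick, "")
        return model.generate_content(
            f"Context:\n{context}\n\nQ: {question}").text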
Frummy 4 hours ago [-]
Now imagine it with theorems as entities and lean proofs as relationships
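One toy reading of that: theorem names as nodes, an edge A -> B meaning the Lean proof of B invokes A (dependencies invented for illustration):

    import networkx as nx

    g = nx.DiGraph()
    # edge A -> B: the proof of B uses A
    g.add_edge("Nat.add_comm", "Nat.add_left_comm")
    g.add_edge("Nat.add_assoc", "Nat.add_left_comm")

    # what does add_left_comm depend on?
    print(list(g.predecessors("Nat.add_left_comm")))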
8thcross 3 hours ago [-]
Building knowledge graphs (GraphRAGs) is obsolete from an academic and technical point of view. LLMs are getting better with built-in graph-capable algorithms like SONAR and knowledge embeddings. Like someone said - just use NotebookLM instead. But they are useful in a corporate setup when the infrastructure, teams, and skills are lagging by years.
phren0logy 3 hours ago [-]
My use case is for documents related to a legal issue, where a foundation model has no knowledge of any of the participants or particular issues. There are many, many such situations. Your statement is ignorant and overly broad.
timfrazer 3 hours ago [-]
Could you provide some academic proofs? From what I read this isn’t true, so I’d be interested to see what you’re referring to.