Decoding on Graphs (DoG) is a novel framework that integrates Large Language Models (LLMs) with Knowledge Graphs (KGs) to enhance question-answering (QA) tasks. The key innovation is the concept of a "well-formed chain", a sequence of interconnected fact triplets from the KG that starts with entities mentioned in the question and leads logically to the answer.
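For intuition, here is a minimal sketch in Python of what a well-formed chain looks like as data; the entities, relations, and the `is_well_formed` check are illustrative assumptions, not the paper's code.

```python
# Sketch of a "well-formed chain": a sequence of KG triplets (head, relation,
# tail) where each hop attaches to an entity already introduced, starting from
# the question entities and ending at the answer. All names are hypothetical.

Triplet = tuple[str, str, str]

def is_well_formed(chain: list[Triplet], question_entities: set[str]) -> bool:
    """Every triplet's head must already be known when that triplet is added."""
    seen = set(question_entities)
    for head, _relation, tail in chain:
        if head not in seen:
            return False
        seen.add(tail)  # the tail entity becomes available for later hops
    return True

# Example: "Where was the director of Inception born?"
chain = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Christopher Nolan", "born_in", "London"),
]
print(is_well_formed(chain, {"Inception"}))  # True; the chain ends at the answer
```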
To ensure that LLMs generate these chains faithfully, DoG introduces graph-aware constrained decoding, which dynamically restricts the LLM's token generation based on the KG's topology. This is achieved using a trie data structure that maintains valid token sequences corresponding to the KG's paths, ensuring that each generated reasoning step aligns with actual KG relations and entities.
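As a rough illustration of the trie idea (not the authors' implementation), the sketch below builds a token trie over serialized KG triplets and exposes the set of valid next tokens for a given prefix; the serialization format and the HuggingFace-style tokenizer interface are assumptions.

```python
# Illustrative token trie for graph-aware constrained decoding. At each
# decoding step the LLM's vocabulary is masked down to `allowed_next(prefix)`,
# so only token sequences that spell out real KG triplets can be generated.

class TokenTrie:
    def __init__(self) -> None:
        self.children: dict[int, "TokenTrie"] = {}
        self.is_end = False  # marks the end of a complete triplet string

    def insert(self, token_ids: list[int]) -> None:
        node = self
        for t in token_ids:
            node = node.children.setdefault(t, TokenTrie())
        node.is_end = True

    def allowed_next(self, prefix: list[int]) -> list[int]:
        """Token ids that extend `prefix` toward some valid triplet string."""
        node = self
        for t in prefix:
            node = node.children.get(t)
            if node is None:
                return []  # prefix already left the space of valid sequences
        return list(node.children)

def build_trie(triplet_strings: list[str], tokenizer) -> TokenTrie:
    """`tokenizer` is any object with an HF-style `encode` method."""
    trie = TokenTrie()
    for s in triplet_strings:
        trie.insert(tokenizer.encode(s, add_special_tokens=False))
    return trie
```

In a HuggingFace `transformers` setup, such a trie could back the existing `prefix_allowed_tokens_fn` hook of `model.generate`, which supports exactly this kind of per-step vocabulary restriction without touching the model's weights.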
Additionally, DoG employs beam search at the triplet level to explore multiple plausible reasoning paths in parallel, improving the robustness and accuracy of the answers. By constraining the LLM's decoding process without altering its parameters or requiring additional training, DoG leverages the LLM's inherent reasoning capabilities while grounding its outputs in the structured knowledge of the KG.
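A hedged sketch of the triplet-level beam search follows: each hypothesis is a partial chain extended by whole KG triplets and ranked by a cumulative score. Here `score_triplet` is a hypothetical stand-in for the LLM's log-probability of a reasoning step, and the KG is modeled as a plain list of triplets.

```python
# Illustrative triplet-level beam search: hypotheses are partial chains,
# extended one whole triplet at a time rather than one token at a time.

def candidate_triplets(kg, frontier):
    """Triplets whose head entity is already reachable from the chain so far."""
    return [t for t in kg if t[0] in frontier]

def beam_search_chains(question_entities, kg, score_triplet,
                       beam_width=3, max_hops=3):
    beams = [([], 0.0, set(question_entities))]  # (chain, score, known entities)
    for _ in range(max_hops):
        expanded = []
        for chain, score, frontier in beams:
            for trip in candidate_triplets(kg, frontier):
                if trip in chain:
                    continue  # avoid trivially revisiting the same fact
                expanded.append((chain + [trip],
                                 score + score_triplet(chain, trip),
                                 frontier | {trip[2]}))
        if not expanded:
            break
        # keep only the top-k partial chains by cumulative score
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams

# Toy usage with a uniform scorer:
kg = [("Inception", "directed_by", "Christopher Nolan"),
      ("Christopher Nolan", "born_in", "London")]
print(beam_search_chains({"Inception"}, kg, lambda chain, trip: 0.0))
```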
Experimental results demonstrate that DoG outperforms existing methods on several KGQA benchmarks, particularly in complex multi-hop reasoning scenarios, highlighting its effectiveness and adaptability across different KGs and LLMs.
All rights with the authors:
Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs
through Generation of Well-Formed Chains
[ Link ]
Nice idea by @mit and the Univ of Hong Kong
00:00 Augment LLMs with Knowledge Graphs
00:47 Subgraph retrievers
02:03 Agents for Integrating LLM and KG
04:25 NEW IDEA by MIT & HK Univ
08:33 Example of Decoding on Graphs
15:21 Implementation PROMPT DoG
17:20 Linear graph forms
19:43 Graph-aware constrained decoding
24:20 Harvard MED Agents for LLM on KG
#aiagents
#airesearch
#artificialintelligence
#massachusettsinstituteoftechnology