HugeGraph-AI

hugegraph-ai integrates HugeGraph with artificial intelligence capabilities, providing comprehensive support for developers to build AI-powered graph applications.
✨ Key Features
- GraphRAG: Build intelligent question-answering systems with graph-enhanced retrieval
- Knowledge Graph Construction: Automated graph building from text using LLMs
- Graph ML: Integration with 20+ graph learning algorithms (GCN, GAT, GraphSAGE, etc.)
- Python Client: Easy-to-use Python interface for HugeGraph operations
- AI Agents: Intelligent graph analysis and reasoning capabilities
🚀 Quick Start
[!NOTE]
For a complete deployment guide and detailed examples, please refer to hugegraph-llm/README.md
Prerequisites
- Python 3.9+ (3.10+ recommended for hugegraph-llm)
- uv (recommended package manager)
- HugeGraph Server 1.3+ (1.5+ recommended)
- Docker (optional, for containerized deployment)
Option 1: Docker Deployment (Recommended)
# Clone the repository
git clone https://github.com/apache/incubator-hugegraph-ai.git
cd incubator-hugegraph-ai
# Set up environment and start services
cp docker/env.template docker/.env
# Edit docker/.env to set your PROJECT_PATH
cd docker
docker-compose -f docker-compose-network.yml up -d
# Access services:
# - HugeGraph Server: http://localhost:8080
# - RAG Service: http://localhost:8001
Option 2: Source Installation
# 1. Start HugeGraph Server
docker run -itd --name=server -p 8080:8080 hugegraph/hugegraph
# 2. Clone and set up the project
git clone https://github.com/apache/incubator-hugegraph-ai.git
cd incubator-hugegraph-ai/hugegraph-llm
# 3. Install dependencies
uv venv && source .venv/bin/activate
uv pip install -e .
# 4. Start the demo
python -m hugegraph_llm.demo.rag_demo.app
# Visit http://127.0.0.1:8001
Basic Usage Examples
GraphRAG - Question Answering
from hugegraph_llm.operators.graph_rag_task import RAGPipeline
# Initialize RAG pipeline
graph_rag = RAGPipeline()
# Ask questions about your graph
result = (graph_rag
    .extract_keywords(text="Tell me about Al Pacino.")
    .keywords_to_vid()
    .query_graphdb(max_deep=2, max_graph_items=30)
    .synthesize_answer()
    .run())
Knowledge Graph Construction
from hugegraph_llm.models.llms.init_llm import LLMs
from hugegraph_llm.operators.kg_construction_task import KgBuilder
# Build KG from text
TEXT = "Your text content here..."
builder = KgBuilder(LLMs().get_chat_llm())
(builder
    .import_schema(from_hugegraph="hugegraph")
    .chunk_split(TEXT)
    .extract_info(extract_type="property_graph")
    .commit_to_hugegraph()
    .run())
Graph Machine Learning
from pyhugegraph.client import PyHugeClient
# Connect to HugeGraph (a local server with default credentials is assumed)
client = PyHugeClient("127.0.0.1", "8080", user="admin", pwd="admin", graph="hugegraph")
# See the hugegraph-ml documentation for detailed ML algorithm examples
📦 Modules
hugegraph-llm
Large language model integration for graph applications:
- GraphRAG: Retrieval-augmented generation with graph data
- Knowledge Graph Construction: Build KGs from text automatically
- Natural Language Interface: Query graphs using natural language
- AI Agents: Intelligent graph analysis and reasoning
hugegraph-ml
Graph machine learning with 20+ implemented algorithms:
- Node Classification: GCN, GAT, GraphSAGE, APPNP, etc.
- Graph Classification: DiffPool, P-GNN, etc.
- Graph Embedding: DeepWalk, Node2Vec, GRACE, etc.
- Link Prediction: SEAL, GATNE, etc.
hugegraph-python-client
Python client for HugeGraph operations:
- Schema Management: Define vertex/edge labels and properties
- CRUD Operations: Create, read, update, delete graph data
- Gremlin Queries: Execute graph traversal queries
- REST API: Complete HugeGraph REST API coverage
🤝 Contributing
We welcome contributions! Please see our contribution guidelines for details.
Development Setup:
- Use GitHub Desktop for easier PR management
- Run ./style/code_format_and_analysis.sh before submitting PRs
- Check existing issues before reporting bugs

📄 License
hugegraph-ai is licensed under Apache 2.0 License.

1 - HugeGraph-LLM
Please refer to the AI repository README for the most up-to-date documentation; the official website is updated and synchronized with it regularly.
Bridge the gap between Graph Databases and Large Language Models
🎯 Overview
HugeGraph-LLM is a comprehensive toolkit that combines the power of graph databases with large language models.
It enables seamless integration between HugeGraph and LLMs for building intelligent applications.
Key Features
- 🏗️ Knowledge Graph Construction - Build KGs automatically using LLMs + HugeGraph
- 🗣️ Natural Language Querying - Operate graph databases using natural language (Gremlin/Cypher)
- 🔍 Graph-Enhanced RAG - Leverage knowledge graphs to improve answer accuracy (GraphRAG & Graph Agent)
For detailed source code documentation, visit our DeepWiki page (recommended).
📋 Prerequisites
[!IMPORTANT]
- Python: 3.10+ (not tested on 3.12)
- HugeGraph Server: 1.3+ (recommended: 1.5+)
- UV Package Manager: 0.7+
🚀 Quick Start
Choose your preferred deployment method:
Option 1: Docker Compose (Recommended)
The fastest way to get started with both HugeGraph Server and RAG Service:
# 1. Set up environment
cp docker/env.template docker/.env
# Edit docker/.env and set PROJECT_PATH to your actual project path
# 2. Deploy services
cd docker
docker-compose -f docker-compose-network.yml up -d
# 3. Verify deployment
docker-compose -f docker-compose-network.yml ps
# 4. Access services
# HugeGraph Server: http://localhost:8080
# RAG Service: http://localhost:8001
Option 2: Individual Docker Containers
For more control over individual components:
Available Images
- hugegraph/rag: Development image with source code access
- hugegraph/rag-bin: Production-optimized binary (compiled with Nuitka)
# 1. Create network
docker network create -d bridge hugegraph-net
# 2. Start HugeGraph Server
docker run -itd --name=server -p 8080:8080 --network hugegraph-net hugegraph/hugegraph
# 3. Start RAG Service
docker pull hugegraph/rag:latest
docker run -itd --name rag \
-v /path/to/your/hugegraph-llm/.env:/home/work/hugegraph-llm/.env \
-p 8001:8001 --network hugegraph-net hugegraph/rag
# 4. Monitor logs
docker logs -f rag
Option 3: Build from Source
For development and customization:
# 1. Start HugeGraph Server
docker run -itd --name=server -p 8080:8080 hugegraph/hugegraph
# 2. Install UV package manager
curl -LsSf https://astral.sh/uv/install.sh | sh
# 3. Clone and setup project
git clone https://github.com/apache/incubator-hugegraph-ai.git
cd incubator-hugegraph-ai/hugegraph-llm
# 4. Create virtual environment and install dependencies
uv venv && source .venv/bin/activate
uv pip install -e .
# 5. Launch RAG demo
python -m hugegraph_llm.demo.rag_demo.app
# Access at: http://127.0.0.1:8001
# 6. (Optional) Custom host/port
python -m hugegraph_llm.demo.rag_demo.app --host 127.0.0.1 --port 18001
Additional Setup (Optional)
# Download NLTK stopwords for better text processing
python ./hugegraph_llm/operators/common_op/nltk_helper.py
# Update configuration files
python -m hugegraph_llm.config.generate --update
[!TIP]
Check our Quick Start Guide for detailed usage examples and query logic explanations.
💡 Usage Examples
Knowledge Graph Construction
Interactive Web Interface
Use the Gradio interface for visual knowledge graph building:
Input Options:
- Text: Direct text input for RAG index creation
- Files: Upload TXT or DOCX files (multiple selection supported)
Schema Configuration:
- Custom Schema: JSON format following our template (see the sketch after this list)
- HugeGraph Schema: Use existing graph instance schema (e.g., “hugegraph”)

Programmatic Construction
Build knowledge graphs with code using the KgBuilder class:
from hugegraph_llm.models.llms.init_llm import LLMs
from hugegraph_llm.operators.kg_construction_task import KgBuilder
# Initialize and chain operations
TEXT = "Your input text here..."
builder = KgBuilder(LLMs().get_chat_llm())
(
    builder
    .import_schema(from_hugegraph="talent_graph").print_result()
    .chunk_split(TEXT).print_result()
    .extract_info(extract_type="property_graph").print_result()
    .commit_to_hugegraph()
    .run()
)
Pipeline Workflow:
graph LR
A[Import Schema] --> B[Chunk Split]
B --> C[Extract Info]
C --> D[Commit to HugeGraph]
D --> E[Execute Pipeline]
style A fill:#fff2cc
style B fill:#d5e8d4
style C fill:#dae8fc
style D fill:#f8cecc
style E fill:#e1d5e7
Graph-Enhanced RAG
Leverage HugeGraph for retrieval-augmented generation:
from hugegraph_llm.operators.graph_rag_task import RAGPipeline
# Initialize RAG pipeline
graph_rag = RAGPipeline()
# Execute RAG workflow
(
    graph_rag
    .extract_keywords(text="Tell me about Al Pacino.")
    .keywords_to_vid()
    .query_graphdb(max_deep=2, max_graph_items=30)
    .merge_dedup_rerank()
    .synthesize_answer(vector_only_answer=False, graph_only_answer=True)
    .run(verbose=True)
)
RAG Pipeline Flow:
graph TD
A[User Query] --> B[Extract Keywords]
B --> C[Match Graph Nodes]
C --> D[Retrieve Graph Context]
D --> E[Rerank Results]
E --> F[Generate Answer]
style A fill:#e3f2fd
style B fill:#f3e5f5
style C fill:#e8f5e8
style D fill:#fff3e0
style E fill:#fce4ec
style F fill:#e0f2f1
🔧 Configuration
After running the demo, configuration files are automatically generated:
- Environment: hugegraph-llm/.env
- Prompts: hugegraph-llm/src/hugegraph_llm/resources/demo/config_prompt.yaml
[!NOTE]
Configuration changes are automatically saved when using the web interface. For manual changes, simply refresh the page to load updates.
LLM Provider Support: This project uses LiteLLM for multi-provider LLM support.
📚 Additional Resources
- Graph Visualization: Use HugeGraph Hubble for data analysis and schema management
- API Documentation: Explore our REST API endpoints for integration
- Community: Join our discussions and contribute to the project
License: Apache License 2.0 | Community: Apache HugeGraph
2 - GraphRAG UI Details
A follow-up to the main doc introducing the basic UI functions and details; updates and improvements are welcome at any time.
1. Core Logic of the Project
Build RAG Index Responsibilities:
- Split and vectorize text
- Extract text into a graph (construct a knowledge graph) and vectorize the vertices
(Graph)RAG & User Functions Responsibilities:
- Retrieve relevant content from the constructed knowledge graph and vector database based on the query to supplement the prompt.
2. (Processing Flow) Build RAG Index
Construct a knowledge graph, chunk vector, and graph vid vector from the text.

graph TD;
A[Raw Text] --> B[Text Segmentation]
B --> C[Vectorization]
C --> D[Store in Vector Database]
A --> F[Text Segmentation]
F --> G[LLM extracts graph based on schema \nand segmented text]
G --> H[Store graph in Graph Database, \nautomatically vectorize vertices \nand store in Vector Database]
I[Retrieve vertices from Graph Database] --> J[Vectorize vertices and store in Vector Database \nNote: Incremental update]
- Doc(s): Input text
- Schema: The schema of the graph, which can be provided as a JSON-formatted schema or as the graph name (if it exists in the database).
- Graph Extract Prompt Header: The header of the prompt
- Output: Display results
Get RAG Info
Clear RAG Data
- Clear Chunks Vector Index: Clear the chunk vector index
- Clear Graph Vid Vector Index: Clear the graph vid vector index
- Clear Graph Data: Clear the data in the graph database
Import into Vector: Convert the text in Doc(s) into vectors (requires chunking the text first and then converting the chunks into vectors)
Extract Graph Data (1): Extract graph data from Doc(s) based on the Schema, using the Graph Extract Prompt Header and chunked content as the prompt
Load into GraphDB (2): Store the extracted graph data into the database (automatically calls Update Vid Embedding to store vectors in the vector database)
Update Vid Embedding: Convert graph vid into vectors
Execution Flow:
- Input text into the Doc(s) field.
- Click the Import into Vector button to split and vectorize the text, storing it in the vector database.
- Input the graph Schema into the Schema field.
- Click the Extract Graph Data (1) button to extract the text into a graph.
- Click the Load into GraphDB (2) button to store the extracted graph into the graph database (this automatically calls Update Vid Embedding to store the vectors in the vector database).
- Click the Update Vid Embedding button to vectorize the graph vertices and store them in the vector database.
3. (Processing Flow) (Graph)RAG & User Functions
The Import into Vector button in the previous module converts text (chunks) into vectors, and the Update Vid Embedding button converts graph vid into vectors. These vectors are stored separately to supplement the context for queries (answer generation) in this module. In other words, the previous module prepares the data for RAG (vectorization), while this module executes RAG.
This module consists of two parts:
- HugeGraph RAG Query
- (Batch) Back-testing
The first part handles single queries, while the second part handles multiple queries at once. Below is an explanation of the first part.

graph TD;
A[Question] --> B[Vectorize the question and search \nfor the most similar chunk in the Vector Database (chunk)]
A --> F[Extract keywords using LLM]
F --> G[Match vertices precisely in Graph Database \nusing keywords; perform fuzzy matching in \nVector Database (graph vid)]
G --> H[Generate Gremlin query using matched vertices and query with LLM]
H --> I[Execute Gremlin query; if successful, finish; if failed, fallback to BFS]
B --> J[Sort results]
I --> J
J --> K[Generate answer]
- Question: Input the query
- Query Prompt: The prompt template used to ask the final question to the LLM
- Keywords Extraction Prompt: The prompt template for extracting keywords from the question
- Template Num: < 0 disables text2gql; = 0 means no template (zero-shot); > 0 means using the specified number of templates
Query Scope Selection:
- Basic LLM Answer: Does not use RAG functionality
- Vector-only Answer: Uses only vector-based retrieval (queries chunk vectors in the vector database)
- Graph-only Answer: Uses only graph-based retrieval (queries graph vid vectors in the vector database and the graph database)
- Graph-Vector Answer: Uses both graph-based and vector-based retrieval

Execution Flow:
Graph-only Answer:
- Extract keywords from the question using the Keywords Extraction Prompt.

- Use the extracted keywords to match vertices:
  - First, perform an exact match in the graph database.
  - If no match is found, perform a fuzzy match in the vector database (graph vid vector) to retrieve relevant vertices.
- text2gql: Call the text2gql-related interface, using the matched vertices as entities to convert the question into a Gremlin query, and execute it in the graph database.
- BFS: If text2gql fails (LLM-generated queries might be invalid), fall back to a graph query built from a predefined Gremlin query template (essentially a BFS traversal), as sketched below.
Vector-only Answer:
- Vectorize the question and search the vector database (chunk vectors) for the most similar chunks to supplement the prompt.
Sorting and Answer Generation:
- After executing the retrieval, sort the retrieval results to construct the final prompt.
- Generate answers based on the different prompt configurations and display them in separate output fields:
- Basic LLM Answer
- Vector-only Answer
- Graph-only Answer
- Graph-Vector Answer

4. (Processing Flow) Text2Gremlin
Converts natural language queries into Gremlin queries.
This module consists of two parts:
- Build Vector Template Index (Optional): Vectorizes query/gremlin pairs from sample files and stores them in the vector database for reference when generating Gremlin queries.
- Natural Language to Gremlin: Converts natural language queries into Gremlin queries.
The first part is straightforward, so the focus is on the second part.

graph TD;
A[Gremlin Pairs File] --> C[Vectorize query]
C --> D[Store in Vector Database]
F[Natural Language Query] --> G[Search for the most similar query \nin the Vector Database \n(If no Gremlin pairs exist in the Vector Database, \ndefault files will be automatically vectorized) \nand retrieve the corresponding Gremlin]
G --> H[Add the matched pair to the prompt \nand use LLM to generate the Gremlin \ncorresponding to the Natural Language Query]
- Natural Language Query: Input the natural language text to be converted into Gremlin.

- Schema: Input the graph schema.
Execution Flow:
- Input the query (natural language) into the Natural Language Query field.
- Input the graph schema into the Schema field.
- Click the Text2Gremlin button; the execution logic is as follows:
  - Convert the query into a vector.
  - Construct the prompt:
    - Retrieve the graph schema.
    - Query the vector database for example vectors, retrieving query-gremlin pairs similar to the input query (if the vector database lacks examples, it automatically initializes with examples from the resources folder).
  - Generate the Gremlin query using the constructed prompt.
- Input Gremlin queries to execute the corresponding operations.