HAWKI-RAG Documentation Portal
This portal is organized as a guided flow: from prerequisites to deployment, with chapter-by-chapter operational detail.

Overview
HAWKI-RAG is a containerized Retrieval-Augmented Generation platform built to turn crawled website content into usable intelligence. It combines a Laravel operator layer (UI + API) with a FastAPI pipeline for ingestion, retrieval, and optional graph enrichment, so operations stay simple while the backend remains capable.
Crawled Markdown files are processed through the RAG-Anything flow, chunked and embedded with Ollama, indexed in Qdrant for semantic retrieval, and optionally enriched into graph triplets in Neo4j for relation-aware context. The result is a practical knowledge engine that blends speed (vector search) with structure (graph reasoning) in one operational stack.
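The actual chunking behavior is configured inside the RAG-Anything pipeline, but the chunk-with-overlap step described above can be sketched as a simple sliding window. This is an illustration only: `chunk_size` and `overlap` here are made-up defaults, not the platform's real settings.

```python
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    """Split text into overlapping fixed-size character windows.

    chunk_size/overlap are illustrative values, not HAWKI-RAG's configuration.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Step back by `overlap` so adjacent chunks share a boundary window.
        start = end - overlap
    return chunks
```

The overlap keeps sentences that straddle a chunk boundary present in both neighboring chunks, which helps embedding-based retrieval return complete context.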
Read in Order
1. Requirements: Hardware, software, ports, and platform prerequisites before you run anything.
2. Setup with Makefile: The core operational commands for networking, startup, health checks, and logs.
3. Architecture: Beginner-friendly explanation of the full RAG system and service interactions.
4. Installation: Zero-to-running installation sequence with expected outputs and failure fixes.
5. Environment, DB, Queue: Complete environment variable map, migrations, and queue setup details.
6. Ingestion and Embeddings: End-to-end ingest flow, chunking/embedding behavior, and monitoring.
7. Commands Catalogue: Practical command reference with purpose, expected output, and common fixes.
8. RagSearcher Triplets Update: Interface contract and runtime behavior for triplet-aware retrieval in MCP, Laravel, and Python.
9. Repository Map: Folder-by-folder orientation for Laravel, Python, Docker, volumes, and logs.
Quick Start
```shell
make network
make up-core
make test-services
```
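Beyond `make test-services`, you can probe the service ports directly to confirm the core containers are reachable. The sketch below is an assumption-laden convenience, not part of HAWKI-RAG: the host/port pairs are common defaults for Qdrant, Ollama, and Neo4j and may differ from your compose and `.env` configuration.

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Assumed default ports; adjust to match your own deployment.
SERVICES = {
    "Qdrant": ("localhost", 6333),
    "Ollama": ("localhost", 11434),
    "Neo4j (bolt)": ("localhost", 7687),
}

for name, (host, port) in SERVICES.items():
    status = "up" if port_open(host, port) else "DOWN"
    print(f"{name:14s} {host}:{port}  {status}")
```

A plain TCP probe only shows that a service is listening; the health-check chapters cover how to verify that each service is actually functional.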