# Flux Server

**Consciousness-Enhanced Knowledge Retrieval System**

Flux Server is a standalone microservice that provides natural language query capabilities over knowledge graphs, enhanced with consciousness-based navigation and active inference.
## What it does
- Natural language queries over your knowledge base
- Consciousness-enhanced search using CLAUSE (attractor basins, thoughtseeds)
- Active inference-based response synthesis
- Graph + vector + full-text unified search in Neo4j
## What it doesn't do
- Document ingestion (that's Dionysus)
- Knowledge graph building (that's Dionysus)
- File processing (that's Dionysus)
## Architecture

```
User/Frontend
      ↓
Flux Server :9127
  ├─→ Query Engine
  ├─→ Neo4j Searcher
  ├─→ Response Synthesizer
  └─→ CLAUSE Navigator
      ↓
Neo4j + Redis
```
## System Requirements
- macOS, Linux, or Windows
- Python 3.11+
- 8GB RAM minimum (16GB recommended)
- 10GB disk space
## Required Services (installed natively)

- **Neo4j**: Knowledge graph database
- **Redis**: Caching and real-time data
- **Ollama**: Local LLM for response synthesis
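Before (or after) installing, you can confirm which of these services are already reachable. This is a minimal sketch using only the Python standard library; the ports are the services' conventional defaults (7474 Neo4j Browser, 7687 Neo4j Bolt, 6379 Redis, 11434 Ollama), so adjust them if your installation differs.

```python
import socket

# Conventional default ports; adjust if your services are configured differently.
SERVICES = {
    "neo4j-browser": ("localhost", 7474),
    "neo4j-bolt": ("localhost", 7687),
    "redis": ("localhost", 6379),
    "ollama": ("localhost", 11434),
}

def check_service(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in SERVICES.items():
    status = "up" if check_service(host, port) else "DOWN"
    print(f"{name:14s} {host}:{port}  {status}")
```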
## Installation

### Install services (macOS with Homebrew)

```bash
# Install Homebrew (if not installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install Neo4j
brew install neo4j

# Install Redis
brew install redis

# Install Ollama
brew install ollama

# Install Python 3.11 (if needed)
brew install python@3.11
```

### Start services

```bash
# Start Neo4j
brew services start neo4j
# Wait for Neo4j to start (check http://localhost:7474)
# Default credentials: neo4j/neo4j (change on first login)

# Start Redis
brew services start redis

# Start Ollama and pull a model
ollama serve &
ollama pull llama2
```

### Configure Neo4j

- Open Neo4j Browser: http://localhost:7474
- Log in with the default credentials: `neo4j`/`neo4j`
- Set a secure password (update `.env` with `NEO4J_PASSWORD=your_password`)
- Verify the connection
### Set up Flux Server

```bash
# Clone or copy Flux Server
cd /path/to/flux-server

# Create virtual environment
python3.11 -m venv flux-env

# Activate virtual environment
source flux-env/bin/activate   # macOS/Linux
# OR
flux-env\Scripts\activate      # Windows

# Install Python dependencies
pip install -r requirements.txt
```

### Configure environment

```bash
# Copy environment template
cp .env.example .env

# Edit .env file (update NEO4J_PASSWORD if you changed it)
nano .env
```

Key settings to verify:

```bash
NEO4J_PASSWORD=your_secure_password  # Match what you set in Neo4j
PORT=9127
OLLAMA_MODEL=llama2
```
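As a sanity check that these settings made it into the process environment, here is a small sketch. The `REQUIRED` list mirrors the three settings above and is illustrative only, not the authoritative set — see `.env.example` for that.

```python
import os

# Illustrative subset of settings; .env.example is the authoritative list.
REQUIRED = ["NEO4J_PASSWORD", "PORT", "OLLAMA_MODEL"]

def missing_settings(env) -> list:
    """Return the required settings that are absent or empty in a mapping."""
    return [name for name in REQUIRED if not env.get(name)]

# Report anything missing from the current environment.
missing = missing_settings(os.environ)
print("Missing settings:", ", ".join(missing) if missing else "none")
```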
## Run the server

```bash
# Activate virtual environment (if not already active)
source flux-env/bin/activate

# Start server
uvicorn src.app_factory:app --host 0.0.0.0 --port 9127 --reload
```

Expected output:

```
INFO:     Uvicorn running on http://0.0.0.0:9127
INFO:     Started reloader process
INFO:     Started server process
INFO:     Application startup complete.
```
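If you script the startup, you can poll the health endpoint until the server answers instead of sleeping a fixed interval. This is a standard-library sketch; the `/health` URL is the endpoint this README documents, and the timeouts are arbitrary defaults.

```python
import time
import urllib.request
import urllib.error

def wait_for_health(url: str = "http://localhost:9127/health",
                    timeout_s: float = 30.0, interval_s: float = 1.0) -> bool:
    """Poll url until it returns HTTP 200 or the overall timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # Server not up yet; retry after a short pause.
        time.sleep(interval_s)
    return False

# Example: wait_for_health() after launching uvicorn, then start sending queries.
```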
## Verify the installation

```bash
# Test health endpoint
curl http://localhost:9127/health
# Expected: {"status":"ok","service":"flux-server","version":"1.0.0"}

# Test root endpoint
curl http://localhost:9127/

# Test query endpoint (requires existing knowledge)
curl -X POST http://localhost:9127/api/query \
  -H "Content-Type: application/json" \
  -d '{"question": "What is consciousness?"}'
```

## Usage

### Query the knowledge base (Python)

```python
import requests

response = requests.post(
    "http://localhost:9127/api/query",
    json={
        "question": "What is active inference?",
        "user_id": "optional-user-id",
        "context": {}
    }
)
result = response.json()
print(f"Answer: {result['answer']}")
print(f"Sources: {len(result['sources'])} documents")
print(f"Confidence: {result['confidence']}")
```

### Navigate the knowledge graph with consciousness guidance

```python
import requests

response = requests.post(
    "http://localhost:9127/api/clause/navigate",
    json={
        "query": "explore consciousness and emergence",
        "budget": {"token": 1000, "time": 30},
        "constraints": {"max_depth": 3}
    }
)
path = response.json()
print(f"Path taken: {path['path']}")
print(f"Insights: {path['insights']}")
```

## API Reference

### POST /api/query

Natural language query processing.
Request:

```json
{
  "question": "What is consciousness?",
  "user_id": "optional",
  "context": {},
  "thoughtseed_id": "optional"
}
```

Response:

```json
{
  "response_id": "uuid",
  "query_id": "uuid",
  "answer": "Synthesized answer...",
  "sources": [...],
  "confidence": 0.85,
  "processing_time_ms": 1234,
  "thoughtseed_trace": {...}
}
```

### POST /api/clause/navigate

Consciousness-enhanced graph navigation.

Request:

```json
{
  "query": "explore topic",
  "budget": {"token": 1000, "time": 30},
  "constraints": {"max_depth": 3}
}
```

### GET /health

Health check.

Response:

```json
{
  "status": "ok",
  "service": "flux-server",
  "version": "1.0.0"
}
```

## Configuration

See `.env.example` for all configuration options.
Critical settings:

- `NEO4J_PASSWORD`: Must match your Neo4j password
- `PORT`: Default 9127; change if the port is in use
- `OLLAMA_MODEL`: LLM model for synthesis (llama2, mistral, etc.)

### Performance tuning

For large knowledge bases (>10k documents), increase Neo4j memory in `neo4j.conf`:

```
dbms.memory.heap.initial_size=2G
dbms.memory.heap.max_size=4G
```

For faster queries:

- Use a faster Ollama model (e.g., `phi`)
- Raise `CONSCIOUSNESS_DETECTION_THRESHOLD` (higher = faster but less accurate)
## Troubleshooting

### Neo4j issues

```bash
# Check if Neo4j is running
brew services list | grep neo4j

# Restart Neo4j
brew services restart neo4j

# Check Neo4j Browser: http://localhost:7474
```

### Redis issues

```bash
# Check if Redis is running
brew services list | grep redis

# Restart Redis
brew services restart redis

# Test Redis connection
redis-cli ping  # Should return "PONG"
```

### Ollama issues

```bash
# Pull the model
ollama pull llama2

# List available models
ollama list

# Test Ollama
curl http://localhost:11434/api/tags
```

### Python dependency issues

```bash
# Ensure virtual environment is activated
source flux-env/bin/activate

# Reinstall dependencies
pip install -r requirements.txt --force-reinstall
```
## Development

### Run tests

```bash
# Activate virtual environment
source flux-env/bin/activate

# Run tests
pytest tests/

# With coverage
pytest tests/ --cov=src
```

### Code quality

```bash
# Format code
black src/

# Lint
flake8 src/
```
## Production deployment

1. Use a production ASGI setup (Gunicorn with Uvicorn workers):

   ```bash
   pip install gunicorn
   gunicorn src.app_factory:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:9127
   ```

2. Set up a systemd service (Linux):

   ```ini
   [Unit]
   Description=Flux Server
   After=network.target neo4j.service redis.service

   [Service]
   User=flux
   WorkingDirectory=/opt/flux-server
   ExecStart=/opt/flux-server/flux-env/bin/uvicorn src.app_factory:app --host 0.0.0.0 --port 9127
   Restart=always

   [Install]
   WantedBy=multi-user.target
   ```

3. Configure the firewall:

   ```bash
   # Allow port 9127
   sudo ufw allow 9127/tcp
   ```

4. Set up a reverse proxy (Nginx):

   ```nginx
   server {
       listen 80;
       server_name flux.example.com;

       location / {
           proxy_pass http://localhost:9127;
           proxy_set_header Host $host;
           proxy_set_header X-Real-IP $remote_addr;
       }
   }
   ```
## Components

1. **Query Engine** (`query_engine.py`)
   - Orchestrates the query processing pipeline
   - Manages Neo4j and Ollama integration

2. **Neo4j Searcher** (`neo4j_searcher.py`)
   - Unified graph + vector + full-text search
   - Uses Neo4j's native vector similarity

3. **Response Synthesizer** (`response_synthesizer.py`)
   - LLM-based answer generation via Ollama
   - Context-aware synthesis

4. **CLAUSE System** (`services/clause/`)
   - Path navigation with consciousness guidance
   - Attractor basin tracking
   - ThoughtSeed evolution
## Query flow

```
Query → Query Engine
          ├→ Neo4j Searcher (retrieve context)
          ├→ Response Synthesizer (generate answer)
          └→ CLAUSE Navigator (consciousness enhancement)
          ↓
       Response
```
## License

[Add your license here]
## Support

- Issues: GitHub Issues
- Documentation: Full Docs
- Community: [Discord/Forum Link]
## Related projects

- **Dionysus**: Document ingestion and knowledge graph building
- **Frontend**: UI for interacting with Flux Server
**Version:** 1.0.0
**Last Updated:** 2025-10-05