Installation & Requirements

This guide will help you set up and deploy your own AnonDocs instance. AnonDocs is designed to be self-hosted, giving you complete control over your data and privacy.

System Requirements

Minimum Requirements

  • CPU: 2 cores
  • RAM: 4GB (8GB recommended)
  • Storage: 5GB free space
  • Node.js: v18 or higher (see the version check below)
  • Operating System: Linux, macOS, or Windows (WSL recommended for Windows)

Recommended Requirements

  • CPU: 4+ cores
  • RAM: 16GB or more
  • Storage: 20GB+ SSD
  • GPU: Optional but recommended for faster LLM processing (supports CUDA)
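
Before installing, you can quickly confirm the host meets these requirements. The commands below are standard shell utilities (free and df shown for Linux):

# Confirm Node.js is v18 or higher
node --version

# Check available memory and free disk space (Linux)
free -h
df -h .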

Installation Methods

Option 1: Docker (Recommended)

The easiest way to get started with AnonDocs is with Docker.

Quick Start with Docker Compose

# Clone the repository
git clone https://github.com/AI-SmartTalk/AnonDocs.git
cd AnonDocs

# Copy environment file
cp .env.example .env

# Edit .env with your configuration
nano .env

# Start with Docker Compose
docker-compose up -d

The service will be available at http://localhost:3000.
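
If the service does not come up, the container logs are the first place to look. Two standard Docker Compose commands cover most checks:

# Confirm the containers are running
docker-compose ps

# Follow the application logs
docker-compose logs -f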

Manual Docker Setup

# Build the image
docker build -t anondocs .

# Run the container
docker run -d \
--name anondocs \
-p 3000:3000 \
--env-file .env \
anondocs
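
To verify the container started and inspect its output:

# Confirm the container is running
docker ps --filter name=anondocs

# Tail the application logs
docker logs -f anondocs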

Option 2: Node.js Direct Installation

For development or custom deployments:

# Clone the repository
git clone https://github.com/AI-SmartTalk/AnonDocs.git
cd AnonDocs

# Install dependencies
npm install

# Copy environment file
cp .env.example .env

# Edit configuration
nano .env

# Build TypeScript
npm run build

# Start the server
npm start
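
For a long-running deployment you will usually want a process supervisor so the server restarts on failure. Below is a minimal systemd unit sketch; the install path /opt/AnonDocs and the anondocs service user are illustrative assumptions, not part of the repository:

# /etc/systemd/system/anondocs.service (hypothetical path and user)
[Unit]
Description=AnonDocs server
After=network.target

[Service]
User=anondocs
WorkingDirectory=/opt/AnonDocs
EnvironmentFile=/opt/AnonDocs/.env
ExecStart=/usr/bin/npm start
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable --now anondocs.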

Option 3: Kubernetes

For production Kubernetes deployments, see the k8s/ directory in the repository for example manifests.

# Apply Kubernetes configurations
kubectl apply -f k8s/
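
After applying the manifests, confirm the pods and service came up. The exact resource names depend on the manifests in k8s/, so adjust the commands accordingly:

# Watch the pods start
kubectl get pods -w

# Check the exposed service
kubectl get svc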

Initial Setup

1. Environment Configuration

Create a .env file based on .env.example:

cp .env.example .env

Key configuration variables:

# Server Configuration
PORT=3000

# LLM Provider (ollama or openai)
DEFAULT_LLM_PROVIDER=ollama

# Ollama Configuration (if using Ollama)
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=mistral-nemo

# OpenAI-compatible API (if using vLLM, LM Studio, LocalAI)
OPENAI_BASE_URL=http://localhost:8000/v1
OPENAI_MODEL=mistralai/Mistral-7B-Instruct-v0.2
OPENAI_API_KEY=not-required

# Processing Configuration
CHUNK_SIZE=1500
CHUNK_OVERLAP=0
ENABLE_PARALLEL_CHUNKS=false

2. Install LLM Provider

Before running AnonDocs, you need to set up an LLM provider. See LLM Provider Setup for detailed instructions.

Quick Ollama Setup:

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a recommended model
ollama pull mistral-nemo
# or
ollama pull llama3.1
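
Before starting AnonDocs, confirm the model downloaded and the daemon responds:

# List locally available models
ollama list

# Quick smoke test of the model
ollama run mistral-nemo "Say hello"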

3. Verify Installation

# Check if server is running
curl http://localhost:3000/health

# Expected response: {"status":"ok"}

Next Steps

With the server running, continue with LLM Provider Setup to configure and tune your model backend.

Troubleshooting

Port Already in Use

# Find and kill process on port 3000
lsof -ti:3000 | xargs kill -9

# Or change PORT in .env

Ollama Connection Failed

  • Ensure Ollama is running: ollama serve
  • Check the base URL: http://localhost:11434 (see the connectivity check below)
  • Verify the model is pulled: ollama list
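
To confirm the Ollama API is reachable from the machine running AnonDocs:

# Should respond with "Ollama is running"
curl http://localhost:11434

# List the models the daemon can serve
curl http://localhost:11434/api/tags

Note that if AnonDocs runs inside Docker, localhost refers to the container itself; point OLLAMA_BASE_URL at the host instead (for example host.docker.internal on Docker Desktop).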

Dependencies Installation Issues

# Clear cache and reinstall
rm -rf node_modules package-lock.json
npm install
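
If a clean reinstall does not help, a stale npm cache or an unsupported Node.js version are the usual culprits:

# Force-clear the npm cache
npm cache clean --force

# Confirm Node.js is v18 or higher
node --version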

Development Mode

For development with auto-reload:

npm run dev

The server will automatically restart when you make code changes.
