🤖 MAID Proxy
MAID Proxy is a flexible Flask application that acts as a proxy for various AI providers (OpenAI, Ollama, local models, etc.). It allows you to manage access to AI models using time-limited keys, making it suitable for educational or controlled environments. This project is designed to be easy to set up and use. It includes an admin dashboard for managing keys and supports multiple AI backends.
Getting Started
1. Set up environment
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
2. Configure .env
Create a .env file with:
# Admin credentials
ADMIN_USERNAME=admin
ADMIN_PASSWORD=yourStrongPassword
# AI Provider Selection (openai, ollama, local, lorem)
AI_PROVIDER=ollama
# PostgreSQL Database Configuration
DATABASE_URL=postgresql://username:password@localhost:5432/dbname
# OpenAI Configuration (when AI_PROVIDER=openai)
OPENAI_API_KEY=your-openai-api-key
OPENAI_MODEL=gpt-3.5-turbo
# Ollama Configuration (when AI_PROVIDER=ollama)
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama2
OLLAMA_TEMPERATURE=0.7
OLLAMA_TIMEOUT=60
# Generic Local Model Configuration (when AI_PROVIDER=local)
LOCAL_MODEL_URL=http://localhost:5000
LOCAL_MODEL_API_PATH=/v1/completions
LOCAL_MODEL_NAME=local-model
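The app is configured entirely through these variables. If you ever need to read them from a script of your own (for example the database check below), python-dotenv is a convenient way to load the .env file — a minimal sketch, assuming the package is installed:
from dotenv import load_dotenv
import os

load_dotenv()                          # read key=value pairs from .env into os.environ
print(os.environ.get("AI_PROVIDER"))   # e.g. "ollama"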
3. Initialize Database
Create your PostgreSQL database and initialize the schema:
python init_database.py
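If you want to confirm that DATABASE_URL is correct before running the init script, a quick connectivity check is enough. This is a sketch, assuming psycopg2 (or psycopg2-binary) is available in your virtualenv and .env has been loaded as shown above:
import os
import psycopg2

# Connect with the same URL the proxy will use
conn = psycopg2.connect(os.environ["DATABASE_URL"])
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])   # PostgreSQL server version string
conn.close()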
Supported AI Providers
1. Ollama (Recommended for local development)
Dedicated provider for Ollama.
Quick Start:
- Install Ollama: ollama.ai
- Pull a model:
  ollama pull llama3.1   (or mistral, codellama, etc.)
- Run the model:
  ollama run llama3.1
- Configure in .env:
  AI_PROVIDER=ollama
  OLLAMA_URL=http://localhost:11434   # Ollama API endpoint
  OLLAMA_MODEL=llama3.1               # Model to use
  OLLAMA_TEMPERATURE=0.7              # Response creativity (0.0-1.0)
  OLLAMA_TIMEOUT=60                   # Request timeout in seconds
- Restart your Flask app
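Before restarting, you can verify that Ollama is reachable and that the configured model has actually been pulled. A small check against Ollama's /api/tags endpoint (which lists locally available models), assuming the requests package is installed:
import requests

OLLAMA_URL = "http://localhost:11434"   # same value as in .env

# /api/tags lists the models Ollama has pulled locally
tags = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5).json()
print([m["name"] for m in tags.get("models", [])])   # expect e.g. "llama3.1:latest"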
2. OpenAI
Uses the OpenAI API for generating responses.
- Requires API key from OpenAI
- Set AI_PROVIDER=openai in .env
- Configure OPENAI_API_KEY and OPENAI_MODEL
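To confirm the key and model work before routing traffic through the proxy, a one-off request with the official openai package (v1+) is a quick sanity check — a sketch, assuming the package is installed and OPENAI_API_KEY is exported:
from openai import OpenAI

client = OpenAI()   # picks up OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",   # same value as OPENAI_MODEL
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)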
3. Generic Local Models
For other local model servers (llama.cpp, etc.)
- Set AI_PROVIDER=local in .env
- Configure LOCAL_MODEL_URL, LOCAL_MODEL_API_PATH, and LOCAL_MODEL_NAME
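Many local servers (llama.cpp's server, for instance) expose an OpenAI-style completions endpoint at a path like /v1/completions. A sketch for checking that the configured URL and path respond before the proxy uses them — the exact payload fields depend on your server, and the values below mirror the .env example:
import requests

LOCAL_MODEL_URL = "http://localhost:5000"    # as in .env
LOCAL_MODEL_API_PATH = "/v1/completions"

payload = {"prompt": "Hello", "max_tokens": 16}   # OpenAI-style completion request
r = requests.post(f"{LOCAL_MODEL_URL}{LOCAL_MODEL_API_PATH}", json=payload, timeout=30)
print(r.status_code, r.json())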
4. Lorem Ipsum (Testing)
Generates random Lorem Ipsum text for testing purposes.
- No external dependencies
- Set AI_PROVIDER=lorem in .env
API Endpoints
POST /ask
Send prompt and user input to the model (requires valid proxy key):
{
  "proxy_key": "student123",
  "system_role": "You are a helpful assistant.",
  "prompt": "What's the weather like in Stockholm?"
}
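The exact response shape depends on the active provider, but assuming /ask passes the provider's result through directly (see the provider contract under "Adding Custom Providers" below), a successful reply looks roughly like this (values are illustrative):
{
  "response": "It's cold and clear in Stockholm today.",
  "usage": {
    "prompt_tokens": 18,
    "completion_tokens": 12,
    "total_tokens": 30
  }
}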
GET /admin
Admin dashboard (requires basic auth):
- View existing keys
- Add/remove time-window access keys
- The admin UI is protected with HTTP Basic Auth (ADMIN_USERNAME / ADMIN_PASSWORD)
Development
To run locally:
python app.py
Testing the API
# First, add a test key via the admin panel at http://localhost:5000/admin
# Then test with curl:
curl -X POST http://localhost:5000/ask \
-H "Content-Type: application/json" \
-d '{
"proxy_key": "your-test-key",
"system_role": "You are a helpful assistant.",
"prompt": "Hello, how are you?"
}'
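The same request from Python, if you prefer that over curl (assuming the requests package is installed):
import requests

resp = requests.post(
    "http://localhost:5000/ask",
    json={
        "proxy_key": "your-test-key",
        "system_role": "You are a helpful assistant.",
        "prompt": "Hello, how are you?",
    },
    timeout=60,
)
print(resp.status_code)
print(resp.json())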
Adding Custom Providers
To create a custom provider:
- Create a new class inheriting from BaseProvider
- Implement the generate_response method
- Register it with the ProviderFactory
Example:
from typing import Any, Dict

from providers import BaseProvider, ProviderFactory

class MyCustomProvider(BaseProvider):
    def generate_response(self, system_role: str, prompt: str) -> Dict[str, Any]:
        # Your implementation here
        return {
            "response": "Generated text",
            "usage": {
                "prompt_tokens": 10,
                "completion_tokens": 20,
                "total_tokens": 30
            }
        }

# Register the provider
ProviderFactory.register_provider("custom", MyCustomProvider)
Then use it by setting AI_PROVIDER=custom in your .env file.
Production Deployment
With systemd:
Create a maid.service unit file (e.g. in /etc/systemd/system/):
[Unit]
Description=Flask MAID app
After=network.target
[Service]
User=youruser
Group=www-data
WorkingDirectory=/home/youruser/maid
Environment="PATH=/home/youruser/maid/.venv/bin"
ExecStart=/home/youruser/maid/.venv/bin/gunicorn -w 4 -b 0.0.0.0:8000 wsgi:app
Restart=always
[Install]
WantedBy=multi-user.target
Then enable and start it:
sudo systemctl daemon-reload
sudo systemctl enable maid
sudo systemctl start maid
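The unit file points gunicorn at wsgi:app. If the repository doesn't already ship a wsgi.py, a minimal entry point looks like this — a sketch that assumes the Flask instance in app.py is named app:
# wsgi.py -- gunicorn entry point (wsgi:app)
from app import app

if __name__ == "__main__":
    app.run()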
Maintainers
Created by @nenzen.
Feel free to fork and adapt.