Aurelis is an enterprise-grade AI code assistant that exclusively uses GitHub models via Azure AI Inference. This guide will get you up and running in minutes.
```bash
# Install from PyPI (recommended)
pip install aurelis-cli

# Or install from source
git clone https://github.com/kanopusdev/aurelis.git
cd aurelis
pip install -e .
```
Aurelis uses GitHub models exclusively through Azure AI Inference. You need a GitHub token with model access.
```bash
# Linux/macOS
export GITHUB_TOKEN="your_github_token_here"

# Windows PowerShell
$env:GITHUB_TOKEN="your_github_token_here"

# Windows Command Prompt
set GITHUB_TOKEN=your_github_token_here
```
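If you want to confirm that the token actually has model access before configuring Aurelis, you can run a quick standalone check against the same Azure AI Inference endpoint Aurelis uses. This step is optional and sits outside Aurelis; it assumes `pip install azure-ai-inference` and uses the `gpt-4o-mini` model that also appears in the configuration example below:

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# GitHub models are served through this Azure AI Inference endpoint.
client = ChatCompletionsClient(
    endpoint="https://models.inference.ai.azure.com",
    credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
)

# A single round trip is enough to prove the token has model access.
response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Reply with the single word: ready"),
    ],
    model="gpt-4o-mini",
)
print(response.choices[0].message.content)
```

If this prints a reply, your token is valid and Aurelis will be able to reach the models.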
```bash
# Initialize configuration
aurelis init

# Verify setup
aurelis models
```
```bash
# List available GitHub models
aurelis models

# Analyze a Python file
aurelis analyze script.py

# Generate code from description
aurelis generate "Create a FastAPI endpoint for user authentication"

# Start interactive shell
aurelis shell
```
The Aurelis shell provides the most powerful experience:
```bash
aurelis shell
```
Inside the shell, you can use these commands:
```bash
# Model management
models                    # List available models
health                    # Check model connectivity
config                    # View configuration

# Code operations
analyze file.py           # Analyze code
generate "description"    # Generate code
explain "code snippet"    # Explain code
fix file.py               # Fix issues
refactor file.py          # Optimize code

# Documentation
docs file.py              # Generate docs
test file.py              # Create tests

# Session management
session save name         # Save session
session load name         # Load session
history                   # Command history
help                      # Show all commands
exit                      # Exit shell
```
Create a `.aurelis.yaml` file in your project root:
```yaml
# GitHub Models Configuration
github_token: "${GITHUB_TOKEN}"  # Use environment variable

models:
  primary: "codestral-2501"      # Primary model for code tasks
  fallback: "gpt-4o-mini"        # Fallback model for reliability

analysis:
  max_file_size: "1MB"
  chunk_size: 3500               # Optimized for 4K context models
  overlap_ratio: 0.15

processing:
  max_retries: 3
  timeout: 60
  concurrent_requests: 5

security:
  audit_logging: true
  secure_token_storage: true

cache:
  enabled: true
  ttl: 3600                      # 1 hour
  max_size: 1000
```
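The `analysis` settings control how large files are split before being sent to a model: `chunk_size` caps the size of each chunk and `overlap_ratio` keeps neighbouring chunks sharing some context. As a rough illustration of what those two numbers mean (a sketch that counts characters for simplicity, not Aurelis's actual chunking code, which may well count tokens):

```python
def chunk_source(text: str, chunk_size: int = 3500, overlap_ratio: float = 0.15):
    """Split text into overlapping chunks.

    Illustrative only: shows how chunk_size and overlap_ratio interact,
    not how Aurelis actually splits files.
    """
    step = int(chunk_size * (1 - overlap_ratio))  # advance ~85% per chunk
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

# With the defaults above, consecutive chunks share ~525 units of context
# (15% of 3500), so code near a boundary appears in both chunks.
print(len(chunk_source("x" * 10_000)))  # -> 4
```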
Aurelis intelligently routes each task to the best GitHub model for the job, using the configured primary model first and the fallback model for reliability.
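Conceptually, primary/fallback routing looks like the sketch below. This is illustrative only, not Aurelis's actual router; the model names come from the example `.aurelis.yaml` above, and `send` stands in for whatever callable submits the request:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelRouter:
    """Illustrative primary/fallback routing, not Aurelis's actual router."""
    primary: str = "codestral-2501"
    fallback: str = "gpt-4o-mini"

    def route(self, task: str, send: Callable[[str, str], str]) -> str:
        # `send(model, task)` is a hypothetical callable that submits the
        # request to the given model and raises on failure.
        try:
            return send(self.primary, task)
        except Exception:
            # Primary unavailable or rate limited: retry on the fallback model.
            return send(self.fallback, task)

# Usage sketch:
# router = ModelRouter()
# answer = router.route("Explain this function", send=my_inference_call)
```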
Test your setup with these commands:
```bash
# Check configuration
aurelis config

# Verify token and connectivity
aurelis models

# Test code analysis
echo "def hello(): print('world')" > test.py
aurelis analyze test.py

# Test code generation
aurelis generate "Python function to calculate fibonacci"

# Test shell
aurelis shell
```
If something isn't working, these checks cover the most common problems:

```bash
# Verify the token is set
echo $GITHUB_TOKEN        # Linux/macOS
echo $env:GITHUB_TOKEN    # Windows PowerShell

# Set the token if it is missing
export GITHUB_TOKEN="your_token_here"

# Check configuration
aurelis config

# Recreate configuration
aurelis init --force

# Check model health
aurelis models

# Verify network connectivity
curl -H "Authorization: Bearer $GITHUB_TOKEN" \
     https://models.inference.ai.azure.com/health

# Reinstall or upgrade Aurelis
pip install --upgrade aurelis-cli

# Or install from source
pip install -e .
```
```bash
# CLI help
aurelis --help

# Command-specific help
aurelis analyze --help
aurelis generate --help

# Shell help
aurelis shell
> help
```
For enterprise setups, see the enterprise deployment documentation. Now that you have Aurelis running, explore the shell commands and configuration options above in your own projects.

Need help? Start with `aurelis --help` or the `help` command inside the shell. Ready to supercharge your development with AI? Let’s code! 🚀