# AI-Generated Rules Configuration Guide
This guide provides step-by-step instructions for configuring and deploying the AI-generated rules feature in the Email Organizer application.
## Prerequisites

### System Requirements

- Python 3.8+
- Flask application with existing user authentication
- PostgreSQL database (SQLite for development)
- Internet connectivity for AI service access

### AI Service Requirements

- OpenAI-compatible API endpoint
- Valid API key with sufficient quota
- Model access (GPT-3.5-turbo recommended)
## Configuration Steps

### 1. Environment Variables

Add the following environment variables to your `.env` file:

```bash
# AI Service Configuration
AI_SERVICE_URL=https://api.openai.com/v1
AI_SERVICE_API_KEY=your-openai-api-key-here
AI_MODEL=gpt-3.5-turbo
AI_TIMEOUT=30
AI_MAX_RETRIES=3
AI_CACHE_TTL=3600

# Feature Configuration
AI_FEATURE_ENABLED=true
AI_CACHE_ENABLED=true
AI_FALLBACK_ENABLED=true
```
### 2. Database Migration

Create and run the database migration for the AI rule cache table:

```bash
# Generate migration
flask db migrate -m "Add AI rule cache table"

# Apply migration
flask db upgrade
```
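The exact schema comes from your migration, but the monitoring and cleanup queries later in this guide reference a handful of specific columns. A hypothetical sketch of that schema (using the stdlib `sqlite3` for illustration; the `rule_text` column and exact types are assumptions, so adjust to your actual model):

```python
import sqlite3

# Hypothetical sketch of the ai_rule_cache table. The columns below are the
# ones the monitoring and cleanup queries in this guide rely on.
SCHEMA = """
CREATE TABLE IF NOT EXISTS ai_rule_cache (
    id INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL,
    folder_name TEXT NOT NULL,
    folder_type TEXT NOT NULL,
    cache_key TEXT UNIQUE,
    rule_text TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    expires_at TIMESTAMP
)
"""

def init_cache_db(path=":memory:"):
    """Create the cache table if it does not exist and return the connection."""
    conn = sqlite3.connect(path)
    conn.execute(SCHEMA)
    conn.commit()
    return conn
```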
### 3. Application Configuration

Update your `config.py` file to include the AI service configuration:

```python
import os

class Config:
    # Existing configuration...

    # AI Service Configuration
    AI_SERVICE_URL = os.environ.get('AI_SERVICE_URL')
    AI_SERVICE_API_KEY = os.environ.get('AI_SERVICE_API_KEY')
    AI_MODEL = os.environ.get('AI_MODEL', 'gpt-3.5-turbo')
    AI_TIMEOUT = int(os.environ.get('AI_TIMEOUT', 30))
    AI_MAX_RETRIES = int(os.environ.get('AI_MAX_RETRIES', 3))
    AI_CACHE_TTL = int(os.environ.get('AI_CACHE_TTL', 3600))

    # Feature Flags
    AI_FEATURE_ENABLED = os.environ.get('AI_FEATURE_ENABLED', 'true').lower() == 'true'
    AI_CACHE_ENABLED = os.environ.get('AI_CACHE_ENABLED', 'true').lower() == 'true'
    AI_FALLBACK_ENABLED = os.environ.get('AI_FALLBACK_ENABLED', 'true').lower() == 'true'
```
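The repeated `.lower() == 'true'` pattern can be factored into a small helper. A sketch (the `env_bool` name is illustrative, not part of the existing codebase); it also accepts common truthy spellings like `1` and `yes`:

```python
import os

def env_bool(name, default=True):
    """Read a boolean feature flag from the environment.

    Accepts common truthy spellings so 'True', 'true', and '1' all enable
    the flag; anything else (including '0' and 'false') disables it.
    """
    return os.environ.get(name, str(default)).lower() in ('true', '1', 'yes', 'on')
```

The feature-flag lines in `Config` then reduce to `AI_FEATURE_ENABLED = env_bool('AI_FEATURE_ENABLED')`.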
### 4. Service Integration

The AI service is automatically integrated into the existing folder creation workflow. No additional configuration is required for the basic functionality.
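When `AI_FALLBACK_ENABLED` is set, the integration can degrade gracefully: if the AI call fails, the user still gets a usable (if generic) rule instead of an error. A minimal sketch of that pattern (the function names and the fallback rule syntax here are illustrative, not the application's actual API):

```python
def generate_rule_with_fallback(generate_fn, folder_name, fallback_enabled=True):
    """Call the AI generator; on failure, fall back to a simple keyword rule."""
    try:
        return generate_fn(folder_name)
    except Exception:
        if not fallback_enabled:
            raise
        # Generic fallback: match the folder name as a keyword in the subject.
        return f'subject CONTAINS "{folder_name}"'
```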
## Testing the Configuration

### 1. Unit Testing

Run the AI service unit tests:

```bash
python -m pytest tests/unit/test_ai_service.py -v
```

### 2. Integration Testing

Test the API endpoints:

```bash
python -m pytest tests/integration/test_ai_rule_endpoints.py -v
```

### 3. Functional Testing

Test the complete user flow:

```bash
python -m pytest tests/functional/test_ai_rule_user_flow.py -v
```
### 4. Manual Testing

1. Start the application:

   ```bash
   flask run --port=5000
   ```

2. Open your browser and navigate to the application.
3. Click "Add New Folder".
4. Test the AI rule generation buttons:
   - "Generate Rule" creates a single rule
   - "Multiple Options" creates multiple rule choices
5. Verify that rules appear with quality scores.
6. Test the "Use This Rule" and "Copy" functionality.
## Troubleshooting

### Common Issues

#### 1. AI Service Connection Errors

**Symptoms:** rule generation fails with "No response from AI service".

**Solutions:**

- Verify the API key is valid and has sufficient quota
- Check network connectivity to the AI service endpoint
- Confirm the AI service URL is correct
- Check the provider's status page (e.g. status.openai.com)

**Debug commands:**

```bash
# Test API connectivity
curl -H "Authorization: Bearer $AI_SERVICE_API_KEY" $AI_SERVICE_URL/models

# Check the API key length (legacy OpenAI keys are 51 characters;
# newer key formats vary, so treat this only as a sanity check).
# Note the -n flag: without it, echo's trailing newline inflates the count by one.
echo -n $AI_SERVICE_API_KEY | wc -c
```
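Transient connection errors are normally absorbed by the `AI_MAX_RETRIES` setting. The retry loop behind a setting like that typically looks something like the following sketch (illustrative, not the application's actual implementation; the injectable `sleep` parameter just makes it testable):

```python
import time

def call_with_retries(fn, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Retry fn() with exponential backoff; re-raise after max_retries attempts."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # retries exhausted; surface the original error
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```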
#### 2. Rate Limiting Issues

**Symptoms:** "Rate limit exceeded" errors.

**Solutions:**

- Monitor API usage and quotas
- Implement request throttling if needed
- Consider upgrading to a higher-tier API plan
- Enable caching to reduce API calls

**Monitoring:**

```sql
-- Check cache hit rate
SELECT
    COUNT(*) AS total_requests,
    COUNT(CASE WHEN cache_key IS NOT NULL THEN 1 END) AS cached_requests,
    ROUND(COUNT(CASE WHEN cache_key IS NOT NULL THEN 1 END) * 100.0 / COUNT(*), 2) AS cache_hit_rate
FROM ai_rule_cache
WHERE created_at > NOW() - INTERVAL '1 day';
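If throttling is needed, a small client-side token bucket in front of the AI calls keeps the application under the provider's limits. A sketch (an assumption about how you might implement it, not existing application code; the injectable `clock` makes it deterministic to test):

```python
import time

class TokenBucket:
    """Simple client-side throttle: allow `rate` requests per `per` seconds."""

    def __init__(self, rate, per, clock=time.monotonic):
        self.capacity = rate
        self.tokens = float(rate)
        self.fill_rate = rate / per   # tokens regained per second
        self.clock = clock
        self.last = clock()

    def allow(self):
        """Return True and consume a token if a request may proceed now."""
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Requests that are denied can be queued, delayed, or rejected with a "try again later" message in the UI.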
#### 3. Database Issues

**Symptoms:** cache not working or database errors.

**Solutions:**

- Verify database permissions
- Check that the cache table was created
- Monitor cache expiration
- Clear the cache if needed

**Debug commands:**

```sql
-- Check cache table status
SELECT
    COUNT(*) AS total_cache_entries,
    COUNT(CASE WHEN expires_at > NOW() THEN 1 END) AS active_cache_entries,
    COUNT(CASE WHEN expires_at <= NOW() THEN 1 END) AS expired_cache_entries
FROM ai_rule_cache;

-- Clear expired cache entries
DELETE FROM ai_rule_cache WHERE expires_at <= NOW();
```
#### 4. UI Issues

**Symptoms:** AI controls not appearing or not working.

**Solutions:**

- Verify the feature flag is enabled
- Check template rendering
- Test JavaScript functionality
- Verify the HTMX configuration

**Debug steps:**

1. Open the browser developer tools.
2. Check for JavaScript errors in the console.
3. Verify that HTMX requests are being made.
4. Check network responses for the AI endpoints.
## Performance Optimization

### 1. Caching Optimization

```sql
-- Create indexes for better cache performance
CREATE INDEX idx_ai_rule_cache_user_folder ON ai_rule_cache(user_id, folder_name, folder_type);
CREATE INDEX idx_ai_rule_cache_expires ON ai_rule_cache(expires_at);
CREATE INDEX idx_ai_rule_cache_key ON ai_rule_cache(cache_key);
```
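Cache effectiveness also depends on how `cache_key` is derived: identical requests must map to the same key. A sketch of a deterministic key function (the exact inputs and normalization are assumptions; match them to whatever your service actually sends to the model):

```python
import hashlib

def make_cache_key(user_id, folder_name, folder_type, model="gpt-3.5-turbo"):
    """Derive a deterministic cache key so identical requests hit the same row.

    Normalizing the folder name (case, surrounding whitespace) improves the
    hit rate without changing the meaning of the generated rule.
    """
    normalized = folder_name.strip().lower()
    raw = f"{user_id}:{normalized}:{folder_type}:{model}"
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()
```

Including the model name in the key means a model upgrade naturally invalidates old entries instead of serving stale rules.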
### 2. Connection Pooling

Configure connection pooling in your database settings for better performance under load.

### 3. Rate Limiting

Implement rate limiting to prevent abuse (these keys are read by Flask-Limiter, if you use it):

```python
# Add to your Flask app configuration
RATELIMIT_STORAGE_URL = 'memory://'
RATELIMIT_DEFAULT = "100 per hour"
```
## Security Considerations

### 1. API Key Security

- Store API keys securely using environment variables
- Rotate API keys regularly
- Monitor API usage for suspicious activity
- Apply the principle of least privilege to API access

### 2. Input Validation

The system includes comprehensive input validation:

- Folder name validation (length, characters)
- Rule text validation (format, length)
- Folder type validation (enum values)
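As a rough sketch of what those checks can look like (the allowed folder types, the name pattern, and the 64-character limit are hypothetical; substitute your application's actual rules):

```python
import re

VALID_FOLDER_TYPES = {"inbox", "custom", "smart"}    # hypothetical enum values
FOLDER_NAME_RE = re.compile(r"^[\w][\w \-]{0,63}$")  # word char first, then up to 63 more

def validate_folder_input(name, folder_type):
    """Return a list of validation errors; an empty list means the input is valid."""
    errors = []
    if not FOLDER_NAME_RE.match(name or ""):
        errors.append("invalid folder name")
    if folder_type not in VALID_FOLDER_TYPES:
        errors.append("unknown folder type")
    return errors
```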
### 3. Output Sanitization

AI responses are sanitized before storage:

- HTML tag removal
- Script injection prevention
- Content length validation
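A minimal sketch of such a sanitizer (the 500-character cap is an assumed limit; note that regex tag-stripping is a storage hygiene measure, not a complete XSS defense, so keep template autoescaping enabled on output as well):

```python
import re

TAG_RE = re.compile(r"<[^>]+>")
MAX_RULE_LENGTH = 500  # assumed limit; use whatever your schema allows

def sanitize_rule_text(text):
    """Strip HTML tags, collapse whitespace, and enforce a length cap."""
    no_tags = TAG_RE.sub("", text)
    collapsed = " ".join(no_tags.split())
    return collapsed[:MAX_RULE_LENGTH]
```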
## Monitoring and Maintenance

### 1. Health Checks

Set up regular health checks:

```bash
# Monitor AI service availability
# (include the Authorization header if your endpoint requires authentication)
curl -f $AI_SERVICE_URL/models || echo "AI service unavailable"

# Monitor database connectivity
psql $DATABASE_URL -c "SELECT 1;" || echo "Database unavailable"
```

### 2. Log Monitoring

Monitor logs for errors and performance issues:

```bash
# Check for AI service errors
tail -f app.log | grep "AI service"

# Monitor performance
tail -f app.log | grep "generate-rule"
```
### 3. Regular Maintenance

- Clean up expired cache entries weekly
- Monitor API usage and quotas
- Review error logs regularly
- Update AI models as new versions become available

## Backup and Recovery

### 1. Database Backup

Include the AI rule cache table in your regular backup strategy:

```bash
# Backup command example
pg_dump $DATABASE_URL > backup_$(date +%Y%m%d).sql
```

### 2. Configuration Backup

Back up your environment configuration:

```bash
# Copy environment variables
cp .env .env.backup
```
### 3. Recovery Procedures

**Cache recovery:** the cache holds only derived data, so it can be restored from a backup if available, or simply left to repopulate as users generate rules again.

**Service recovery:**

- Verify AI service status
- Check API credentials
- Test rule generation
- Monitor for errors
## Scaling Considerations

### 1. Horizontal Scaling

- Use a distributed cache for multi-instance deployments
- Implement session affinity if needed
- Consider read replicas for database scaling

### 2. Vertical Scaling

- Increase memory for caching
- Optimize database connections
- Monitor CPU usage for AI processing

### 3. Load Testing

Test with simulated load:

```bash
# Example load testing command
locust -f locustfile.py --users 50 --spawn-rate 5 --run-time 5m
```
## Support and Resources

### Community Support

- GitHub Issues: report bugs and request features
- Documentation: contribute improvements
- Discussions: share best practices

### Professional Support

For enterprise deployments, consider:

- AI service provider support
- Database administration support
- Security consulting

This configuration guide provides everything needed to deploy and maintain the AI-generated rules feature. For additional questions or issues, refer to the troubleshooting section or contact the development team.