Humile - Translator Agent
Overview
Humile is the agent responsible for translation and response normalization in the TKM AI Agency Platform. It ensures consistent communication across different languages by handling message translation, language detection, and response formatting for all other agents in the system.
Directory Structure
Backend/CRM/Humile/
├── data/ # Response logs and data storage
├── humile.py # Main agent implementation
├── api_humile.py # FastAPI endpoints
├── tools.py # Chat and translation utilities
├── tools_schema.py # Data models and schemas
└── tools_definitions.py # Constants and definitions
Main Components
HumileAgent Class
The core class that drives Humile's chat and translation operations:
- LLM provider integration (Groq)
- Message translation
- Response normalization
- User configuration management
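The actual interface lives in humile.py; the sketch below is a hypothetical skeleton, assuming method names such as get_user_config, normalize_response, and translate that are not confirmed by this document, and is meant only to show how those responsibilities could fit together.

# Hypothetical skeleton of the translator/normalizer agent class.
# Class, method, and model names are illustrative, not taken from humile.py.
from dataclasses import dataclass, field

@dataclass
class UserConfig:
    default_language: str = "en"
    timezone: str = "UTC"

@dataclass
class HumileAgentSketch:
    llm_provider: str = "groq"
    model: str = "llama-3.1-8b-instant"   # placeholder model name
    user_configs: dict = field(default_factory=dict)

    def get_user_config(self, user_id: str) -> UserConfig:
        # Fall back to defaults when a user has no stored preferences.
        return self.user_configs.get(user_id, UserConfig())

    def normalize_response(self, user_id: str, raw: dict) -> dict:
        # Translate and reformat another agent's raw output for delivery.
        config = self.get_user_config(user_id)
        translated = self.translate(raw.get("content", ""), config.default_language)
        return {"content": translated, "language": config.default_language}

    def translate(self, text: str, target_language: str) -> str:
        # The real implementation calls the LLM provider; identity fallback here.
        return text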
Processing Pipeline
1. Message reception
2. Source agent identification
3. Language detection/configuration
4. Response processing/translation
5. Message normalization
6. Delivery to conversation
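A minimal walk-through of these six stages, reusing the hypothetical HumileAgentSketch methods from the previous sketch; every helper name stands in for logic that lives in humile.py.

# Illustrative pipeline: the numbered comments map to the stages above.
def process_message(message: dict, agent) -> dict:
    # 1. Message reception: the raw payload arrives from another agent.
    payload = message.get("data", {})

    # 2. Source agent identification.
    source_agent = message.get("source_agent", "unknown")

    # 3. Language detection / configuration lookup for this user.
    config = agent.get_user_config(message["user_id"])
    target_language = config.default_language

    # 4. Response processing / translation.
    translated = agent.translate(payload.get("content", ""), target_language)

    # 5. Message normalization into a common delivery format.
    normalized = {
        "source_agent": source_agent,
        "target_language": target_language,
        "content": translated,
    }

    # 6. Delivery to the conversation (e.g. hand-off to the storage agent).
    return normalized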
Key Features
Message Processing
- Multi-language support
- Response normalization
- Source agent handling
- User preferences
Translation Capabilities
- Language detection
- Target language configuration
- Translation services
- Fallback handling
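One way detection and fallback could be combined is sketched below, assuming the langdetect package for detection; the detection mechanism Humile actually uses is not documented here, and translate_fn is a hypothetical callable.

# Detection plus fallback sketch; never blocks delivery on a failure.
from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

def translate_if_needed(text: str, target_language: str, translate_fn) -> str:
    """Translate only when the detected language differs from the target."""
    try:
        source_language = detect(text)
    except LangDetectException:
        # Detection failed (e.g. empty or ambiguous input): deliver as-is.
        return text

    if source_language == target_language:
        return text

    try:
        return translate_fn(text, source_language, target_language)
    except Exception:
        # Fallback handling: return the original text rather than failing.
        return text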
Configuration Management
- User language preferences
- Timezone settings
- LLM configurations
- Agent-specific settings
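A minimal sketch of layered configuration, assuming per-user settings override agent-wide defaults; the default values and model name are placeholders, and the key names mirror the User Configuration model shown later in this document.

DEFAULT_CONFIG = {
    "user": {"default_language": "en", "timezone": "UTC"},
    "llm": {"provider": "groq", "model": "llama-3.1-8b-instant",
            "temperature": 0.2, "max_tokens": 1024},
}

def merge_config(user_overrides: dict) -> dict:
    """Overlay user-specific settings on top of the defaults, section by section."""
    merged = {section: dict(values) for section, values in DEFAULT_CONFIG.items()}
    for section, values in user_overrides.items():
        merged.setdefault(section, {}).update(values)
    return merged

# Example: a user who prefers Turkish responses keeps every other default.
config = merge_config({"user": {"default_language": "tr"}})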
API Operations
Normalized Response Request
# Request Format
{
"conversation_id": str,
"user_id": str,
"organization_id": str,
"normalized_response": bool,
"data": dict
}
# Response Format
{
"success": bool,
"data": dict,
"metadata": {
"processed_at": str,
"source_agent": str,
"target_language": str
}
}
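The request and response formats above could be exposed as a FastAPI endpoint along the following lines; the route path, model names, and the way source_agent and target_language are derived are assumptions, and the real handler lives in api_humile.py.

# Hypothetical endpoint mirroring the request/response formats above.
from datetime import datetime, timezone
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class NormalizeRequest(BaseModel):
    conversation_id: str
    user_id: str
    organization_id: str
    normalized_response: bool = True
    data: dict = {}

@app.post("/humile/normalize")          # illustrative path, not confirmed
async def normalize(request: NormalizeRequest) -> dict:
    processed = request.data            # real handler would translate/normalize
    return {
        "success": True,
        "data": processed,
        "metadata": {
            "processed_at": datetime.now(timezone.utc).isoformat(),
            "source_agent": request.data.get("source_agent", "unknown"),
            "target_language": "en",    # would come from user configuration
        },
    }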
Integration
LLM Integration
- Groq API integration
- Model configuration
- Temperature settings
- Token management
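A translation call through the Groq Python SDK (pip install groq) might look like the sketch below; the model name, temperature, and prompt are placeholders rather than Humile's actual settings.

# Groq chat-completion sketch: system prompt instructs a translation.
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

def translate_with_groq(text: str, target_language: str) -> str:
    completion = client.chat.completions.create(
        model="llama-3.1-8b-instant",   # placeholder model
        temperature=0.2,
        max_tokens=1024,
        messages=[
            {"role": "system",
             "content": f"Translate the user's message into {target_language}. "
                        "Return only the translated text."},
            {"role": "user", "content": text},
        ],
    )
    return completion.choices[0].message.content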
Agent Communication
- Atta: Message storage and retrieval
- Prometheus: System responses
- Carina: Search responses
- Scalaris/Hova/Orion: Normalized responses
Performance Features
Response Optimization
- Asynchronous processing
- Timeout handling
- Response caching
- Error recovery
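The sketch below illustrates how asynchronous processing, timeouts, caching, and error recovery could be combined; the cache key scheme, TTL, and timeout values are assumptions, not values taken from humile.py.

# In-memory cache plus asyncio timeout around a normalization worker.
import asyncio
import time

_cache: dict[str, tuple[float, dict]] = {}
CACHE_TTL_SECONDS = 300

async def normalize_with_timeout(key: str, worker, timeout: float = 10.0) -> dict:
    # Serve a recent result from the cache when available.
    cached = _cache.get(key)
    if cached and time.monotonic() - cached[0] < CACHE_TTL_SECONDS:
        return cached[1]

    try:
        result = await asyncio.wait_for(worker(), timeout=timeout)
    except asyncio.TimeoutError:
        # Error recovery: degrade gracefully instead of failing the request.
        return {"success": False, "error": "normalization timed out"}

    _cache[key] = (time.monotonic(), result)
    return result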
Configuration Caching
- User preferences cache
- Language settings
- Timezone information
- Fewer repeated configuration lookups per request
Error Handling
- Translation errors
- LLM failures
- Configuration issues
- Timeout management
- Comprehensive logging
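A sketch of the layered error handling described above; the exception types caught, the logger name, and the returned fields are illustrative, not taken from the codebase.

# Log failures with tracebacks and surface structured results instead of raising.
import logging

logger = logging.getLogger("humile")

def safe_normalize(raw: dict, translate_fn, target_language: str) -> dict:
    try:
        content = translate_fn(raw.get("content", ""), target_language)
        return {"success": True, "content": content}
    except TimeoutError:
        logger.warning("translation timed out; returning original content")
        return {"success": True, "content": raw.get("content", ""), "degraded": True}
    except Exception:
        # Translation errors, LLM failures, configuration issues: log and report.
        logger.exception("normalization failed")
        return {"success": False, "error": "normalization failed"}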
Data Models
Message Format
{
"timestamp": str,
"source_agent": str,
"type": str,
"content": {
"original_input": str,
"processed_content": dict
},
"tokens": dict
}
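A possible Pydantic rendering of this message format; field names follow the schema shown above, but the real models live in tools_schema.py and may differ.

from pydantic import BaseModel

class MessageContent(BaseModel):
    original_input: str
    processed_content: dict

class Message(BaseModel):
    timestamp: str
    source_agent: str
    type: str
    content: MessageContent
    tokens: dict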
User Configuration
{
"user": {
"default_language": str,
"timezone": str
},
"llm": {
"provider": str,
"model": str,
"temperature": float,
"max_tokens": int
}
}
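A matching Pydantic sketch for the user configuration block; the defaults shown here are assumptions added only to make the example self-contained.

from pydantic import BaseModel

class UserSettings(BaseModel):
    default_language: str = "en"
    timezone: str = "UTC"

class LLMSettings(BaseModel):
    provider: str = "groq"
    model: str
    temperature: float = 0.2
    max_tokens: int = 1024

class UserConfiguration(BaseModel):
    user: UserSettings
    llm: LLMSettings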
Language Support
- Default language configuration
- User-specific language settings
- Automatic language detection
- Translation quality control
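How the target language could be resolved from these settings, as a minimal sketch: the user's stored preference wins, otherwise a platform default applies (the default value here is an assumption).

PLATFORM_DEFAULT_LANGUAGE = "en"

def resolve_target_language(user_config: dict) -> str:
    # Prefer the user's configured language, fall back to the platform default.
    return user_config.get("user", {}).get("default_language", PLATFORM_DEFAULT_LANGUAGE)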
Future Enhancements
- Additional LLM providers
- Enhanced translation capabilities
- Advanced response formatting
- Real-time translation features