Prometheus - General Chat Agent
Overview
Prometheus is the General Chat agent in the TKM AI Agency Platform, responsible for handling general conversations with users. It processes user messages, maintains conversation context, and generates responses using large language model providers such as Groq and OpenAI.
Directory Structure
Backend/CRM/Prometheus/
├── data/ # Conversation logs and data
├── prometheus.py # Main agent implementation
├── tools.py # Chat processing utilities
├── tools_schema.py # Data models and schemas
└── tools_definitions.py # Constants and definitions
Main Components
PrometheusAgent Class
The core component that handles chat operations:
- Message processing
- Conversation management
- Response generation
- Context handling
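The responsibilities above can be sketched as a minimal agent class. This is an illustrative skeleton, not the actual implementation in prometheus.py; all names and the echo-style response are hypothetical stand-ins.

```python
# Minimal sketch of a PrometheusAgent-style class (illustrative only;
# the real class in prometheus.py integrates an LLM provider).
class PrometheusAgent:
    def __init__(self):
        self.conversations = {}  # conversation_id -> message history

    def process_message(self, conversation_id, message):
        """Record the user message, generate a reply, and track context."""
        history = self.conversations.setdefault(conversation_id, [])
        history.append({"role": "user", "content": message})
        response = self._generate_response(history)
        history.append({"role": "assistant", "content": response})
        return response

    def _generate_response(self, history):
        # A real implementation would call an LLM provider here.
        return f"Echo: {history[-1]['content']}"
```

The key point is that the agent owns per-conversation state, so each call to `process_message` sees the accumulated history.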
Processing Pipeline
1. Message Reception
   - User message intake
   - Language detection
   - Context retrieval
   - Input preprocessing
2. Response Processing
   - Context analysis
   - Intent understanding
   - Response generation
   - Language model integration
3. Conversation Management
   - Context updating
   - History tracking
   - Session management
   - Response delivery
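The three pipeline stages can be sketched as chained functions. The stage functions below are hypothetical simplifications (real reception also does language detection, and real response processing calls a language model):

```python
# Illustrative three-stage pipeline: reception -> response -> management.
def receive(message, context_store, conversation_id):
    # Intake and preprocessing; context retrieval from the store.
    context = context_store.get(conversation_id, [])
    return {"text": message.strip(), "context": context}

def respond(received):
    # Placeholder for intent analysis and LLM-based generation.
    return f"You said: {received['text']}"

def manage(context_store, conversation_id, received, response):
    # Context updating and history tracking before delivery.
    history = context_store.setdefault(conversation_id, [])
    history.append({"user": received["text"], "assistant": response})
    return response

def process_conversation(message, context_store, conversation_id):
    received = receive(message, context_store, conversation_id)
    response = respond(received)
    return manage(context_store, conversation_id, received, response)
```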
API Operations
Process Conversation
- Endpoint: /process_conversation
- Method: POST
- Purpose: Processes user messages and generates responses
- Request Format:
  {
    "conversation_id": "chat_identifier",
    "user_id": "user_identifier",
    "message": "User message content",
    "atta_context": {},
    "user_language": "en"
  }
- Response Format:
  {
    "response": "Generated response",
    "conversation_id": "chat_identifier",
    "metadata": {
      "timestamp": "2024-01-15T12:00:00Z",
      "language": "en",
      "model_used": "groq-model"
    }
  }
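A client-side sketch of building and validating a /process_conversation request body. The field names follow the request format above; the validation rules (which fields are required) are assumptions:

```python
import json

# Fields assumed to be mandatory; the real endpoint may differ.
REQUIRED_FIELDS = ("conversation_id", "user_id", "message")

def build_request(conversation_id, user_id, message,
                  atta_context=None, user_language="en"):
    """Assemble a /process_conversation payload and serialize it to JSON."""
    payload = {
        "conversation_id": conversation_id,
        "user_id": user_id,
        "message": message,
        "atta_context": atta_context or {},
        "user_language": user_language,
    }
    missing = [f for f in REQUIRED_FIELDS if not payload[f]]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return json.dumps(payload)
```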
Key Features
- Chat Processing
  - Natural language understanding
  - Context-aware responses
  - Multi-language support
  - Conversation flow management
- Model Integration
  - LLM provider integration (Groq, OpenAI)
  - Response generation
  - Model configuration
  - Temperature control
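Provider selection with temperature control might look like the sketch below. The provider keys mirror the Groq/OpenAI integrations above, but the model names and default values are illustrative, not the platform's actual configuration:

```python
# Illustrative per-provider defaults; model names and values are assumed.
DEFAULT_SETTINGS = {
    "groq": {"model": "groq-model", "temperature": 0.7, "max_tokens": 1024},
    "openai": {"model": "gpt-4o-mini", "temperature": 0.7, "max_tokens": 1024},
}

def model_settings(provider, **overrides):
    """Return a settings dict for a provider, with per-call overrides."""
    if provider not in DEFAULT_SETTINGS:
        raise ValueError(f"unknown provider: {provider}")
    settings = dict(DEFAULT_SETTINGS[provider])
    settings.update(overrides)
    return settings
```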
- Context Management
  - Conversation history
  - User preferences
  - Session tracking
  - State management
Integration
Platform Integration
- Interfaces with other CRM agents
- Event-based communication
- Context sharing
- Response coordination
Model Providers
- Groq integration
- OpenAI support
- Model configuration
- API management
Error Handling
- Message Processing
  - Input validation
  - Language detection errors
  - Context retrieval issues
  - Recovery procedures
- Response Generation
  - Model errors
  - Token limits
  - Fallback responses
  - Error notifications
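A common pattern for model errors and fallback responses is retry-then-fallback. This sketch is a generic illustration; the retry count, exception handling, and fallback text are assumptions, not the agent's actual behavior:

```python
# Assumed canned reply; the real fallback text may differ.
FALLBACK_RESPONSE = "Sorry, I couldn't process that right now. Please try again."

def generate_with_fallback(generate, prompt, max_retries=2):
    """Call `generate`, retrying on failure, then fall back to a canned reply."""
    for _ in range(max_retries):
        try:
            return generate(prompt)
        except Exception:
            continue  # model error or token-limit failure; retry
    return FALLBACK_RESPONSE
```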
Performance Features
- Optimization
  - Response caching
  - Context pruning
  - Resource management
  - Token optimization
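Context pruning under a token budget can be sketched as keeping the most recent messages that fit. The four-characters-per-token estimate below is a rough heuristic, not the platform's actual tokenizer:

```python
def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token.
    return max(1, len(text) // 4)

def prune_history(history, token_budget):
    """Keep the newest messages whose estimated tokens fit the budget."""
    kept, used = [], 0
    for message in reversed(history):
        cost = estimate_tokens(message["content"])
        if used + cost > token_budget:
            break
        kept.append(message)
        used += cost
    return list(reversed(kept))  # restore chronological order
```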
- Configuration
  - Model selection
  - Temperature settings
  - Token limits
  - Provider settings
Data Models
Message Format
{
"message_id": str,
"content": str,
"user_id": str,
"conversation_id": str,
"timestamp": datetime,
"metadata": dict
}
Conversation Context
{
"conversation_id": str,
"history": list,
"user_preferences": dict,
"current_context": dict,
"metadata": dict
}
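The two schemas above map directly onto Python dataclasses. The field names and types follow the documented formats; the timestamp default is an illustrative choice:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Message:
    message_id: str
    content: str
    user_id: str
    conversation_id: str
    # Defaulting to "now" in UTC is an assumption, not documented behavior.
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    metadata: dict = field(default_factory=dict)

@dataclass
class ConversationContext:
    conversation_id: str
    history: list = field(default_factory=list)
    user_preferences: dict = field(default_factory=dict)
    current_context: dict = field(default_factory=dict)
    metadata: dict = field(default_factory=dict)
```

Using `field(default_factory=...)` avoids sharing one mutable list or dict across instances.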