The solution was built as a conversational AI assistant designed to handle natural language queries for timesheet management and project tracking. At its core, it processes user requests through natural language understanding (NLU), executes operations via specialized tools, and provides intelligent responses based on user roles and permissions.
To deliver intelligent, real-time responses, the system follows a carefully orchestrated pipeline:
User Query Processing
Users interact through the Next.js web interface, with real-time chat powered by Server-Sent Events (SSE) for instant communication.
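The SSE wire format itself is simple: each event is an optional `event:` line plus one or more `data:` lines, terminated by a blank line, which a browser `EventSource` parses back into events. A minimal sketch of how a chat chunk might be framed (the `ChatChunk` shape and names are illustrative, not from the actual codebase):

```typescript
// Frame a chat chunk as a Server-Sent Events message.
// SSE messages are plain text: "event:" and "data:" lines ending in a blank line.
interface ChatChunk {
  role: "assistant";
  content: string;
}

function formatSSE(chunk: ChatChunk, event = "message"): string {
  // Multi-line payloads must be split into separate "data:" lines.
  const data = JSON.stringify(chunk)
    .split("\n")
    .map((line) => `data: ${line}`)
    .join("\n");
  return `event: ${event}\n${data}\n\n`;
}

const frame = formatSSE({ role: "assistant", content: "Timesheet saved." });
// A browser EventSource on the Next.js side parses this into a MessageEvent.
```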
Natural Language Understanding
The user input is processed by a large language model (LLM), which extracts intent and determines the appropriate tools to execute.
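In practice, "determining the appropriate tools" means the LLM emits a structured tool call (a name plus JSON arguments) that the host service dispatches to a registered handler. A minimal sketch of that dispatch, using a hypothetical `getTimesheet` tool with stubbed data (all names here are assumptions for illustration):

```typescript
// Dispatch a structured tool call emitted by the LLM to a registered handler.
type ToolHandler = (args: Record<string, unknown>) => string;

const toolRegistry = new Map<string, ToolHandler>();

// Hypothetical tool: look up logged hours for a user (stubbed for the sketch).
toolRegistry.set("getTimesheet", (args) => {
  return `Hours for ${args.user}: 38.5`;
});

interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

function executeToolCall(call: ToolCall): string {
  const handler = toolRegistry.get(call.name);
  if (!handler) throw new Error(`Unknown tool: ${call.name}`);
  return handler(call.arguments);
}

const result = executeToolCall({
  name: "getTimesheet",
  arguments: { user: "alice" },
});
```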
Tool Execution & Data Retrieval
The MCP Host Service executes specialized tools against the Tuix Timesheets REST API.
Response Generation & Streaming
The LLM formulates a response from the tool results and streams it back to the user in real time.
Conversation History Management
All interactions are stored in Redis for context preservation and session management across multiple conversations.
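Conceptually, each session maps to a capped list of messages keyed by session ID. The sketch below uses an in-memory `Map` in place of Redis, where the same shape would typically be an `RPUSH`/`LTRIM`/`LRANGE` pattern against a per-session list key; the interfaces and cap value are illustrative assumptions:

```typescript
// Conversation history keyed by session ID, capped to the most recent messages.
// An in-memory Map stands in for Redis here; in production this would be
// RPUSH/LTRIM/LRANGE against a per-session list key.
interface Message {
  role: "user" | "assistant";
  content: string;
}

const MAX_HISTORY = 20; // keep LLM context bounded
const store = new Map<string, Message[]>();

function appendMessage(sessionId: string, msg: Message): void {
  const history = store.get(sessionId) ?? [];
  history.push(msg);
  // Drop the oldest messages once the cap is exceeded.
  store.set(sessionId, history.slice(-MAX_HISTORY));
}

function getHistory(sessionId: string): Message[] {
  return store.get(sessionId) ?? [];
}

appendMessage("s1", { role: "user", content: "Log 8 hours on Project X" });
appendMessage("s1", { role: "assistant", content: "Done: 8 hours logged." });
```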
This modular flow ensures low latency, scalability, and a seamless experience across different user roles and complex queries.
Feature Highlights
- Multilingual Conversations: Engages with users across languages, maintaining context throughout the interaction.
- AI-Powered Understanding: Identifies user intent and executes appropriate timesheet operations with precision.
- Automated Operations: Transforms natural language requests into system actions for timesheet management, project tracking, and analytics.
- Role-Based Intelligence: Adapts responses and capabilities based on user permissions: employees see personal data, admins access full system management.
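Role-based scoping like this typically reduces to filtering at the data layer before results ever reach the LLM. A minimal sketch of that idea, where the `Entry` fields and role names are assumptions rather than the real schema:

```typescript
// Scope timesheet entries by role: employees see only their own entries,
// admins see everything. Field names here are illustrative.
interface Entry {
  userId: string;
  project: string;
  hours: number;
}

type Role = "employee" | "admin";

function visibleEntries(entries: Entry[], role: Role, userId: string): Entry[] {
  if (role === "admin") return entries; // full system access
  return entries.filter((e) => e.userId === userId); // personal data only
}

const entries: Entry[] = [
  { userId: "alice", project: "X", hours: 8 },
  { userId: "bob", project: "Y", hours: 6 },
];

const forAlice = visibleEntries(entries, "employee", "alice");
const forAdmin = visibleEntries(entries, "admin", "carol");
```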
Challenges & Lessons Learned
- Tool Synchronization: Keeping the MCP server tools in sync with constantly updated Swagger documentation was essential for maintaining accurate API operations.
- Conversational Context: Beyond understanding queries, the assistant needed to maintain context across multiple interactions while respecting user roles and permissions.
- Finding the Right Balance: Automation delivers speed and efficiency, but ensuring proper data security and role-based access was key to building trust and reliability.
Technical Stack
Backend Services (Go & TypeScript)
- MCP Server: Auto-generated from Swagger documentation, ensuring tools stay synchronized with API changes
- MCP Host Service: Node.js/TypeScript service managing LLM interactions and tool execution
- RAG Service: Go-based service with a vector database for knowledge retrieval
- Redis: Conversation history storage and session management
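Generating tools from Swagger/OpenAPI documentation can be pictured as walking the spec's paths and emitting one tool definition per operation, so regenerating after a spec change keeps the tools in sync. A minimal illustration over a tiny inline spec fragment (the operations and field names are hypothetical):

```typescript
// Derive tool definitions from an OpenAPI-style spec fragment: one tool per
// operationId. Regenerating after the spec changes keeps tools synchronized.
interface Operation {
  operationId: string;
  summary: string;
}

type Spec = Record<string, Record<string, Operation>>; // path -> method -> op

interface ToolDef {
  name: string;
  description: string;
  method: string;
  path: string;
}

function toolsFromSpec(spec: Spec): ToolDef[] {
  const tools: ToolDef[] = [];
  for (const [path, methods] of Object.entries(spec)) {
    for (const [method, op] of Object.entries(methods)) {
      tools.push({
        name: op.operationId,
        description: op.summary,
        method: method.toUpperCase(),
        path,
      });
    }
  }
  return tools;
}

// Hypothetical fragment of a timesheets API spec.
const spec: Spec = {
  "/timesheets": {
    get: { operationId: "listTimesheets", summary: "List timesheet entries" },
    post: { operationId: "createTimesheet", summary: "Create a timesheet entry" },
  },
};

const tools = toolsFromSpec(spec);
```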
AI Infrastructure
- Multi-Provider LLM Support: Google AI (Gemini), OpenAI (ChatGPT), Anthropic (Claude)
- Real-time Streaming: Server-Sent Events for instant response delivery
- Tool Integration: 50+ specialized tools for timesheet operations
Frontend (Next.js)
- Modern React Interface: TypeScript-based chat interface with real-time updates
- Auth0 Integration: Secure user authentication and role management
- Responsive Design: Optimized for desktop and mobile interactions
Looking to improve your application management with AI-driven automation? Let's discuss how a custom AI assistant can be tailored for your business needs.