Key Responsibilities:
- Design, develop, and maintain Python applications.
- Implement RAG (Retrieval-Augmented Generation) pipelines that integrate LLMs (OpenAI, Azure OpenAI, Hugging Face, LangChain, LlamaIndex, etc.) with enterprise and external data sources.
- Develop MCP (Model Context Protocol) integrations that connect tools, APIs, and enterprise data systems to LLMs.
- Build APIs and microservices for AI-powered search, summarization, and conversational AI.
- Create document ingestion pipelines (PDFs, databases, SharePoint, etc.) and manage embeddings in vector databases (Pinecone, Weaviate, FAISS, Qdrant, Azure Cognitive Search, etc.).
- Collaborate with AI engineers, architects, and data teams to deploy RAG/MCP solutions at scale.
- Optimize performance, security, and scalability for production-grade AI systems.
- Stay current with AI frameworks, MCP standards, and cloud AI services.
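For candidates unfamiliar with the term, the retrieval step at the heart of the RAG work described above can be sketched in a few lines. This is a toy illustration using only the standard library: the bag-of-words "embedding" stands in for a real embedding model, and the in-memory ranking stands in for a vector database (FAISS, Pinecone, etc.); all chunk text and the query are made up for the example.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": term counts (a stand-in for a real embedding model).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank stored chunks by similarity to the query; keep the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Invoices are stored in SharePoint under the finance site.",
    "The vacation policy grants 20 days per year.",
    "Vector databases store embeddings for similarity search.",
]
context = retrieve("where are invoices stored", chunks)
# The retrieved chunks are then placed into the LLM prompt as grounding context.
prompt = "Answer using this context:\n" + "\n".join(context)
```

In a production pipeline the same shape holds; only the embedding model, the vector store, and the prompt assembly become real components.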
Required Skills & Experience:
- Minimum of 8 years of IT experience, including at least 1 year of AI experience.
- Strong hands-on Python experience.
- Solid understanding of OOP, REST APIs, and microservices architecture.
- Proven experience building LLM-based applications with RAG integration.
- Practical experience implementing the Model Context Protocol (MCP) for AI tool orchestration.
- Familiarity with vector databases (FAISS, Pinecone, Weaviate, Qdrant, Azure Cognitive Search).
- Hands-on experience with LangChain, LlamaIndex, Hugging Face Transformers, or similar AI libraries.
- Strong problem-solving and cross-functional collaboration skills.
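For orientation on the MCP requirement above: MCP is built on JSON-RPC 2.0, and a client invokes a server-exposed tool with the "tools/call" method. A minimal sketch of that request follows; the tool name and arguments are illustrative, not part of the protocol.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    # Serialize an MCP tool invocation as a JSON-RPC 2.0 request.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool and arguments, purely for illustration.
msg = make_tool_call(1, "search_documents", {"query": "Q3 invoices"})
decoded = json.loads(msg)
```

In practice this framing is handled by an MCP SDK; the sketch only shows the wire-level shape the role's integrations would produce and consume.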