
Lead Specialist - AI/ML (Data Analytics & AI)
- Singapore
- Permanent
- Full-time
Responsibilities:
- Analyse diverse, complex healthcare datasets to identify patterns and extract meaningful insights
- Synthesise findings into actionable recommendations and define opportunities for AI-driven solutions
- Design, develop, implement, and validate a range of machine learning and AI models (including but not limited to predictive modelling, classification, clustering, natural language processing and, where applicable, computer vision techniques) to address specific healthcare challenges and improve patient outcomes or operational efficiency
- Conduct rigorous validation and testing of AI models to ensure they meet clinical efficacy and regulatory standards
- Effectively communicate data-driven insights, analytical results, and model capabilities to diverse stakeholders (both technical and non-technical) through clear presentations, visualisations, and documentation; contribute to disseminating knowledge via research papers or conference presentations when appropriate
- Design, develop, and deploy Generative AI applications using architectures such as Retrieval-Augmented Generation (RAG), Agentic RAG, and Graph RAG. This involves selecting, integrating, and effectively utilising various components, including vector databases (commercial cloud offerings or open-source solutions such as Milvus, Weaviate, or Qdrant), state-of-the-art embedding models, and Large Language Models (LLMs) accessed via cloud provider APIs or deployed from open-source models
- Build and implement agentic AI workflows capable of complex task execution, reasoning, and dynamic interaction with diverse data sources, APIs, and tools to address multifaceted healthcare challenges, using frameworks such as LangChain, LlamaIndex, the OpenAI Agents SDK, LangGraph, or Microsoft Semantic Kernel
- Design, implement, and manage robust CI/CD pipelines for the automated build, testing, and deployment of Generative AI applications and their backend services (e.g., using GitHub Actions, GitLab CI, AWS CodePipeline, Azure DevOps)
- Develop and maintain scalable and efficient backend APIs (e.g., FastAPI) to serve as the interface for these Generative AI applications and integrate them with existing infrastructure
- Implement MLOps practices for managing the lifecycle of components within RAG and agentic systems, including versioning prompts, configurations, knowledge bases, and endpoints
- Containerise applications (e.g., using Docker) and manage deployments (e.g., using Kubernetes, AWS ECS, or Azure App Service) to ensure high availability and scalability
- Implement comprehensive monitoring, logging, and alerting for deployed applications to track performance and usage
- Collaborate effectively with clinical experts, researchers, and other stakeholders to understand intricate healthcare problems and translate them into well-defined data science projects and deployable AI solutions
- Develop strategies to identify, acquire, and use appropriate datasets to develop practical solutions and support decision-making
Requirements:
- A Master's or Bachelor's degree in Computer Science, Artificial Intelligence, Software Engineering, or a closely related technical field
- 3-5 years of experience deploying production-grade Generative AI applications, with a portfolio demonstrating RAG systems, agentic AI solutions, or cloud LLM integrations
- Experience with major cloud AI platforms and their generative AI/search services and components (e.g., Azure AI Search, Azure AI Foundry, Google Vertex AI, Amazon Kendra, Amazon Bedrock, OpenSearch)
- Expert-level Python proficiency is required; extensive experience with GenAI application frameworks (e.g., LangChain, LlamaIndex, OpenAI SDK, LangGraph, Semantic Kernel) is highly desirable, and familiarity with deep learning frameworks (PyTorch, TensorFlow) is beneficial
- Solid experience building and deploying robust, scalable backend APIs (preferably FastAPI); asynchronous programming experience is a strong advantage
- Proven experience designing and implementing CI/CD pipelines for application deployment (e.g., GitHub Actions, GitLab CI, AWS CodePipeline, Azure DevOps) using Git-based workflows is required; an understanding of MLOps for LLM application components (prompts, configs, knowledge bases) is essential
- Proficiency with SQL/NoSQL databases is expected; experience with vector databases (e.g., Pinecone, Weaviate, Milvus) and embedding models is a significant advantage
- Solid Docker experience required; familiarity with container orchestration (Kubernetes, ECS, Azure App Service) is a plus
- Excellent analytical, creative problem-solving, and communication skills, with proven ability to work independently, manage priorities, own projects end-to-end, and collaborate effectively in a fast-paced environment