
AI / GenAI / MLOps Engineer

Salary: Not Disclosed

Job Description & Details

The AI and Generative AI space is exploding, and companies are scrambling to move from experimental prototypes to production‑grade systems. Professionals who can design, deploy, and maintain scalable GenAI pipelines are in high demand right now. This contract role offers a chance to lead real‑world AI initiatives while enjoying the flexibility of remote or hybrid work.

Job Summary

We are seeking a senior‑level AI Engineer who can architect, build, and operate end‑to‑end GenAI and MLOps solutions. The candidate will work with large language models, Retrieval‑Augmented Generation (RAG), agentic workflows, and cloud‑native infrastructures (AWS, Azure) to deliver reliable, production‑ready AI platforms.

Top 3 Critical Skills

Skill                        | Why it's critical                                             | Mastery Level
Generative AI (RAG, LLMs)    | Drives core product functionality and user value              | Senior
MLOps & CI/CD pipelines      | Ensures models are reliably deployed, monitored, and iterated | Senior
Cloud Platforms (AWS, Azure) | Provides scalable, secure infrastructure for AI workloads     | Senior

Interview Preparation

  1. Explain how you would design a Retrieval‑Augmented Generation system using LangChain and a vector database.
    What the interviewer is looking for: Understanding of document indexing, similarity search, prompt engineering, and integration with LLM APIs.
  2. Describe your end‑to‑end MLOps pipeline from model training to production monitoring. Which tools would you choose and why?
    What the interviewer is looking for: Familiarity with CI/CD (GitHub Actions, Jenkins), model registries, containerization (Docker, Kubernetes/EKS), and monitoring solutions (Prometheus, Grafana, SLOs).
  3. How do you handle versioning and rollback of large language models in a cloud environment?
    What the interviewer is looking for: Strategies for model artifact storage (S3, Azure Blob), semantic versioning, A/B testing, canary deployments, and automated rollback triggers.
  4. What are the security considerations when exposing a FastAPI‑based AI microservice publicly?
    What the interviewer is looking for: Authentication/authorization (OAuth2, JWT), rate limiting, input validation, secret management, and compliance (HIPAA, GDPR) for regulated domains.
  5. Compare AWS Bedrock and Azure OpenAI for building enterprise GenAI solutions. When would you choose one over the other?
    What the interviewer is looking for: Knowledge of service capabilities, pricing, data residency, integration with existing cloud services, and vendor lock‑in implications.
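The retrieval step behind question 1 can be sketched without any framework. This is a minimal, framework-free illustration: in a real system the embeddings would come from an embedding model and the store would be an actual vector database (FAISS, pgvector, Pinecone, etc.); the documents and 3-dimensional vectors below are invented purely for demonstration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy corpus: (text, pretend embedding). All values are illustrative.
DOCS = [
    ("Refund policy: refunds are issued within 14 days.", [0.9, 0.1, 0.0]),
    ("Shipping: orders ship within 2 business days.",     [0.1, 0.9, 0.0]),
    ("Support hours: 9am-5pm on weekdays.",               [0.0, 0.2, 0.9]),
]

def retrieve(query_vec, k=2):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, query_vec):
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(retrieve(query_vec))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How long do refunds take?", [0.8, 0.2, 0.1]))
```

In an interview, the point to land is that RAG is retrieval (index, embed, similarity search) plus prompt assembly; LangChain and the vector database replace the toy pieces here, not the overall shape.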
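For question 2, the heart of any CI/CD model pipeline is stage gating: each stage runs only if the previous one passed, and an evaluation gate blocks promotion of a model that regresses. The sketch below is a stand-in for real tooling (GitHub Actions, Jenkins, a model registry); the stage names, the accuracy metric, and the 0.9 threshold are all illustrative.

```python
def train(config):
    # Stand-in for a real training job; returns a model artifact + metric.
    return {"name": config["model"], "accuracy": config["expected_accuracy"]}

def evaluate(model, threshold=0.9):
    """Evaluation gate: only models at or above threshold may promote."""
    return model["accuracy"] >= threshold

def deploy(model, registry):
    """Stand-in for pushing a container/artifact to production."""
    registry.append(model["name"])

def run_pipeline(config, registry):
    model = train(config)
    if not evaluate(model):
        return "blocked: evaluation gate failed"
    deploy(model, registry)
    return "deployed"

registry = []
print(run_pipeline({"model": "ranker-v2", "expected_accuracy": 0.93}, registry))
print(run_pipeline({"model": "ranker-v3", "expected_accuracy": 0.81}, registry))
```

The same gate-then-promote shape recurs at every stage of a production pipeline (lint, tests, eval suite, canary), which is why interviewers ask for the end-to-end view rather than any one tool.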
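The promote/rollback bookkeeping asked about in question 3 can be shown in a few lines. A real setup would store artifacts in S3 or Azure Blob and use a managed registry (e.g. MLflow, SageMaker Model Registry); this in-memory class, with invented version names, only illustrates the mechanics an automated rollback trigger relies on.

```python
class ModelRegistry:
    """Toy registry tracking which model version serves production traffic."""

    def __init__(self):
        self.history = []       # previously promoted versions, oldest first
        self.production = None  # version currently serving traffic

    def promote(self, version):
        """Make `version` live, keeping the old version for rollback."""
        if self.production is not None:
            self.history.append(self.production)
        self.production = version

    def rollback(self):
        """Revert to the previously promoted version (e.g. on an alert)."""
        if not self.history:
            raise RuntimeError("no earlier version to roll back to")
        self.production = self.history.pop()
        return self.production

reg = ModelRegistry()
reg.promote("summarizer-1.0.0")
reg.promote("summarizer-1.1.0")  # canary passed, new version goes live
reg.rollback()                   # error rate spiked; automated rollback fires
print(reg.production)            # -> summarizer-1.0.0
```

In production, `rollback()` would be wired to a monitoring alert (error rate, latency SLO breach) so reversion happens without a human in the loop.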
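For question 4, the single most important mechanism is that the service verifies token signatures before trusting any claim. Production code should use a vetted library (e.g. PyJWT) behind a FastAPI dependency and also check expiry and audience; this stdlib-only HS256 sketch, with a placeholder secret, just shows what signature verification buys you.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # placeholder; load from a secret manager in practice

def b64url(data: bytes) -> bytes:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign(payload: dict) -> str:
    """Produce a compact HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(SECRET, header + b"." + body, hashlib.sha256).digest())
    return b".".join([header, body, sig]).decode()

def verify(token: str):
    """Return the payload if the signature checks out, else None."""
    header, body, sig = token.encode().split(b".")
    expected = b64url(hmac.new(SECRET, header + b"." + body, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # reject: tampered token or wrong signing key
    padded = body + b"=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign({"sub": "user-42", "role": "reader"})
print(verify(token))        # valid token -> payload dict
print(verify(token + "A"))  # tampered signature -> None
```

Note the constant-time `hmac.compare_digest`, which avoids timing side channels; in an interview, pairing this with rate limiting, input validation, and secret rotation covers the ground the question is probing.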

Resume Optimization

Include the following keywords in your resume wherever they reflect genuine experience; both recruiters and automated screening tools search for them directly.

  • AI Engineer
  • Generative AI
  • Retrieval‑Augmented Generation (RAG)
  • Large Language Models (LLM)
  • LangChain / LangGraph / AutoGen
  • MLOps pipelines
  • CI/CD
  • FastAPI
  • AWS Bedrock / Azure OpenAI
  • Cloud microservices (EKS, Azure ML)

Application Strategy

When reaching out to a recruiter, send a concise email that opens with a friendly greeting, attaches your updated resume, and clearly maps your experience to the role. Highlight your top skills—such as building RAG systems, designing MLOps pipelines, and deploying AI on AWS/Azure—and reference specific projects where you delivered production‑grade AI solutions. End by expressing enthusiasm for the opportunity and offering to discuss how you can add immediate value.

Career Roadmap

Current Role                | Typical Experience | Core Focus                           | Next Position
AI / GenAI / MLOps Engineer | 10+ years          | Build & scale production AI systems  | Senior AI Engineer
Senior AI Engineer          | 12‑15 years        | Lead architecture, mentor teams      | AI Engineering Manager
AI Engineering Manager      | 15+ years          | Strategy, cross‑functional delivery  | Director of AI Engineering