Cloud Data Engineer (Ab Initio / Python / SQL / AWS)
Enfycon
Location: Richmond, VA or McLean, VA (Onsite)
Job Type: Contract
Salary: Competitive
Duration: 9 Months
Experience: 4+ years with Ab Initio, Python, SQL, and AWS
Job Description & Details
Data engineering in the cloud is booming as companies move massive workloads to AWS, and expertise in tools like Ab Initio, Python, and SQL is in high demand. This 9-month contract offers a fast-paced Agile environment where you can shape enterprise-scale pipelines and work alongside machine-learning and distributed-systems teams. If you thrive on building scalable, secure data solutions, this role is a prime stepping stone for a cloud-focused career.

# Job Summary
We are looking for a Cloud Data Engineer to design, develop, and maintain ETL workflows using Ab Initio, build data pipelines with Python and SQL, and implement scalable solutions on AWS. The role works on-site in Richmond or McLean, VA, collaborates with Agile teams, and contributes to architecture discussions for big-data and microservice platforms such as Redshift and Snowflake.

# Top 3 Critical Skills
| Skill | Why It's Critical | Mastery Level |
|---|---|---|
| Ab Initio | Core ETL engine for enterprise data flows | Senior |
| Python | Enables data processing, automation, and orchestration | Senior |
| AWS | Platform for scalable storage, compute, and data warehousing | Senior |

# Interview Preparation
1. **Describe how you would design an end-to-end ETL pipeline in Ab Initio for a high-volume data source.**
   *What the interviewer is looking for:* Understanding of Ab Initio components, parallelism, error handling, and performance tuning.
2. **Explain a situation where you optimized a slow SQL query in a data warehouse. What steps did you take?**
   *What the interviewer is looking for:* Ability to analyze execution plans, use indexes, rewrite joins, and apply partitioning.
3. **How do you secure data in transit and at rest when building pipelines on AWS?**
   *What the interviewer is looking for:* Knowledge of IAM policies, KMS encryption, VPCs, SSL/TLS, and best-practice configurations.
4. **Walk me through a Python script you wrote to orchestrate data movement between S3 and Redshift.**
   *What the interviewer is looking for:* Proficiency with boto3, handling large files, idempotency, and error logging.
5. **In an Agile setting, how do you handle changing data requirements mid-sprint without compromising pipeline stability?**
   *What the interviewer is looking for:* Experience with backlog grooming, feature toggles, modular code, and communication with product owners.

# Resume Optimization
Work these keywords into your resume wherever they reflect genuine experience:
- Ab Initio
- Python
- SQL
- AWS
- Redshift
- Snowflake
- ETL workflows
- Agile development
- Data pipelines
- Cloud data engineering

# Application Strategy
When you email the recruiter, open with a friendly greeting, attach your updated resume, and clearly reference the Cloud Data Engineer role. Highlight your top skills, such as Ab Initio, Python, and AWS, and describe a recent project where you built or optimized a data pipeline. Mention any experience with Redshift or Snowflake, and emphasize your ability to work on-site in Richmond or McLean.

# Career Roadmap
| Current Role | Typical Experience | Core Focus | Next Position |
|---|---|---|---|
| Cloud Data Engineer | 4-5 years | Build/maintain cloud ETL pipelines | Senior Cloud Data Engineer |
| Senior Cloud Data Engineer | 6-8 years | Architecture & mentorship | Lead Data Engineer |
| Lead Data Engineer | 8-10+ years | Strategy, cross-team leadership | Data Engineering Manager |
| Data Engineering Manager | 10+ years | Organizational ownership, roadmap | Director of Data Engineering |
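To prepare an answer for the S3-to-Redshift question in the interview section, it can help to have a concrete sketch in hand. The following is a minimal, hypothetical example using boto3's Redshift Data API; the bucket, table, cluster, and IAM role names are placeholders, not details from this posting.

```python
# Hypothetical sketch of a boto3-based S3 -> Redshift load.
# All resource names (bucket, table, cluster, IAM role) are placeholders.

def build_copy_statement(table: str, s3_uri: str, iam_role: str) -> str:
    """Build a Redshift COPY command that loads CSV files from S3."""
    return (
        f"COPY {table} "
        f"FROM '{s3_uri}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS CSV IGNOREHEADER 1"
    )


def submit_copy(cluster_id: str, database: str, db_user: str, sql: str) -> str:
    """Submit the COPY asynchronously via the Redshift Data API.

    Returns the statement id, which can be polled with
    describe_statement to confirm success before marking the batch
    as loaded -- one way to keep the job idempotent on retries.
    """
    import boto3  # imported here so the pure SQL helper above has no AWS dependency

    client = boto3.client("redshift-data")
    response = client.execute_statement(
        ClusterIdentifier=cluster_id,
        Database=database,
        DbUser=db_user,
        Sql=sql,
    )
    return response["Id"]


if __name__ == "__main__":
    sql = build_copy_statement(
        table="analytics.events",
        s3_uri="s3://example-bucket/events/2024-06-01/",
        iam_role="arn:aws:iam::123456789012:role/RedshiftCopyRole",
    )
    print(sql)
```

In an interview answer, the points worth highlighting around a script like this are exactly the ones the question targets: letting Redshift's COPY pull directly from S3 rather than streaming rows through Python (large-file handling), polling the statement id before committing progress (idempotency), and logging failed statement responses (error handling).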