The data platform landscape is exploding as companies chase real-time analytics and AI, making Databricks expertise more valuable than ever. A senior Databricks Administrator role in Austin puts you at the heart of large-scale data, analytics, and machine-learning workloads. This opportunity offers hybrid work, cutting-edge technology, and a chance to shape enterprise-wide data governance.

# Job Summary
We are seeking a senior-level Databricks Administrator to own end-to-end platform operations on AWS, including cluster configuration, workspace management, job scheduling, security, and automation. The role will drive performance tuning for Apache Spark, enforce governance via Unity Catalog, and support AI/ML pipelines with MLflow, while collaborating with engineering teams to implement Terraform-based CI/CD.

# Top 3 Critical Skills
| Skill | Why it's critical | Mastery level |
|-------|-------------------|---------------|
| Databricks Administration (AWS) | Core platform stability and performance for analytics and AI workloads | Senior |
| Apache Spark Performance Tuning | Ensures fast job execution and cost-effective compute usage | Senior |
| Terraform & CI/CD Automation | Enables repeatable, compliant infrastructure deployments at scale | Senior |

# Interview Preparation
**1. How do you design a secure Databricks workspace on AWS with Unity Catalog and SCIM integration?**
*What the interviewer is looking for:* Understanding of IAM, RBAC, SCIM provisioning, and data-governance best practices.

**2. Explain the steps you would take to troubleshoot a Spark job that is running slower than expected.**
*What the interviewer is looking for:* Ability to analyze the Spark UI, identify bottlenecks, adjust configurations, and optimize data formats.

**3. 
Describe how you would implement Terraform to provision Databricks clusters and enforce CI/CD pipelines.**
*What the interviewer is looking for:* Experience with IaC, modular Terraform code, state management, and integration with GitOps.

**4. What are the key considerations for integrating S3 governance policies with Databricks data assets?**
*What the interviewer is looking for:* Knowledge of bucket policies, encryption, access controls, and audit logging.

**5. How have you supported MLflow or other AI/ML workloads on Databricks in production?**
*What the interviewer is looking for:* Hands-on experience with the model registry, experiment tracking, and scaling ML pipelines.

# Resume Optimization
Keywords to weave into your resume:
- Databricks Administration
- AWS
- Apache Spark
- Unity Catalog
- Terraform
- CI/CD Automation
- MLflow
- S3 Governance
- RBAC / SCIM
- Cluster Configuration

# Application Strategy
When reaching out to the recruiter, send a concise email that greets them, briefly states your interest, and attaches your resume. Explicitly highlight your top skills, such as Databricks administration, Spark performance tuning, and Terraform automation, and reference any relevant projects where you implemented these technologies.
Make sure to mention your experience with AWS, Unity Catalog, and supporting AI/ML workloads to align directly with the job requirements.

# Career Roadmap
| Current Role | Typical Experience | Core Focus | Next Position |
|--------------|--------------------|------------|---------------|
| Senior Databricks Administrator | 10–15 years in data platform operations | Platform governance, performance, automation | Lead Data Platform Engineer |
| Lead Data Platform Engineer | 12–18 years, cross-team leadership | Strategy, large-scale architecture, mentorship | Data Platform Director |
| Data Platform Director | 15+ years, executive stakeholder management | Vision, budgeting, enterprise data strategy | VP of Data Engineering |
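Interview question 2 above asks how you would troubleshoot a slow Spark job; a frequent root cause is a poorly sized `spark.sql.shuffle.partitions`. A minimal sketch of one common sizing heuristic, in plain Python so it runs anywhere — the ~128 MB target and the helper name are illustrative assumptions, not requirements from the posting:

```python
# Heuristic for sizing spark.sql.shuffle.partitions: aim for roughly
# 128 MB of shuffle data per partition (a widely cited rule of thumb),
# and never drop below the cluster's core count so every core has work.

TARGET_PARTITION_BYTES = 128 * 1024 * 1024  # ~128 MB target (assumption)

def suggest_shuffle_partitions(shuffle_bytes: int, total_cores: int) -> int:
    """Return a suggested value for spark.sql.shuffle.partitions."""
    by_size = -(-shuffle_bytes // TARGET_PARTITION_BYTES)  # ceiling division
    return max(int(by_size), total_cores, 1)

# Example: ~100 GB of shuffle data on a 64-core cluster
print(suggest_shuffle_partitions(100 * 1024**3, 64))  # -> 800
```

In an interview, pairing a heuristic like this with evidence from the Spark UI (shuffle read/write sizes per stage, skewed task durations) shows you tune from measurements rather than defaults.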
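For interview question 3 on provisioning clusters with Terraform, it helps to know the shape of the cluster spec that a `databricks_cluster` resource ultimately renders. The sketch below builds that payload in Python; the field names follow the Databricks Clusters API, while the name, runtime, node type, and tag values are placeholders, not details from the posting:

```python
import json

# Sketch of a Databricks cluster spec (the JSON a Terraform
# databricks_cluster resource renders). Field names follow the
# Databricks Clusters API; all concrete values are placeholders.
cluster_spec = {
    "cluster_name": "analytics-etl",           # placeholder name
    "spark_version": "<lts-runtime-version>",  # pin an LTS runtime in practice
    "node_type_id": "<aws-instance-type>",     # choose per workload profile
    "autoscale": {"min_workers": 2, "max_workers": 8},
    "autotermination_minutes": 30,             # avoid paying for idle clusters
    "spark_conf": {
        "spark.sql.shuffle.partitions": "auto"  # let AQE size shuffles
    },
    "custom_tags": {"team": "data-platform", "env": "prod"},
}

print(json.dumps(cluster_spec, indent=2))
```

Keeping a spec like this in version-controlled Terraform modules, with autoscaling bounds and auto-termination enforced, is what makes deployments repeatable and cost-compliant at scale.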