Quincy, MA (Onsite)
Full-time
Not Specified
The demand for high‑performance, scalable front‑end applications is soaring as businesses race to deliver seamless digital experiences. Companies are seeking seasoned UI engineers who can shape React architectures and mentor teams, making this Sr UI Developer role a rare chance to lead cutting‑edge projects on‑site in Quincy, MA. If you have a decade of hands‑on React expertise, this position offers the platform to drive technical strategy and make an immediate impact.
# Job Summary
We are looking for a senior UI developer to design, build, and maintain high‑performance web applications using React.js and TypeScript. The role involves defining architectural standards, leading technical strategy, mentoring junior developers, and collaborating closely with backend teams to ensure code quality, security, and optimal performance. This is an onsite, full‑time position based in Quincy, MA.
# Top 3 Critical Skills Table
| Skill | Why it's critical | Mastery Level |
|---|---|---|
| React.js | Core library for building scalable UI; performance & maintainability depend on deep knowledge | Senior |
| TypeScript | Provides static typing for large codebases, reducing bugs and improving collaboration | Senior |
| Front‑end Architecture | Defines component structure, state management, and integration with backend services | Senior |
# Interview Preparation
1. **How would you structure a large React application to ensure scalability and maintainability?**
*What the interviewer is looking for:* Understanding of folder hierarchy, component decomposition, state‑management patterns (Redux, Context), code‑splitting, and lazy loading.
2. **Explain the benefits of using TypeScript in a React project and how you enforce typing across the codebase.**
*What the interviewer is looking for:* Knowledge of static typing advantages, tsconfig configuration, strict mode, and tooling (eslint, prettier) for consistency.
3. **Describe a time you defined architectural standards for a front‑end team. What guidelines did you set and why?**
*What the interviewer is looking for:* Experience with style guides, component libraries, testing strategies, performance budgets, and how those standards improved quality.
4. **How do you collaborate with backend engineers to ensure API contracts are reliable and secure?**
*What the interviewer is looking for:* Use of OpenAPI/Swagger, contract testing, versioning, error handling, and security practices like input validation.
5. **What strategies do you employ to mentor junior developers and raise the overall code quality of the team?**
*What the interviewer is looking for:* Pair programming, code reviews, knowledge‑sharing sessions, onboarding docs, and measurable improvement metrics.
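Question 4 above is language‑agnostic, so a quick sketch can make the idea concrete. As an illustration only (in Python, with hypothetical field names), a minimal response‑contract check that a front‑end team might run against backend payloads in CI could look like:

```python
# Hypothetical contract: field name -> expected type.
EXPECTED_USER_CONTRACT = {"id": int, "email": str, "is_active": bool}

def validate_contract(payload: dict, contract: dict) -> list[str]:
    """Return a list of violations; an empty list means the payload conforms."""
    errors = []
    for field, expected_type in contract.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return errors
```

In practice teams generate checks like this from an OpenAPI spec rather than hand‑writing them, but the interview point is the same: the contract is explicit, versioned, and enforced automatically.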
# Resume Optimization
- React.js
- TypeScript
- UI development
- Front‑end architecture
- Scalable web applications
- High‑performance
- Technical strategy
- Mentor developers
- Backend collaboration
- Code quality
# Application Strategy
When reaching out to the recruiter, send a concise email that starts with a polite greeting, attaches your updated resume, and clearly references the Sr UI Developer role. Highlight your top skills—especially React.js, TypeScript, and front‑end architecture—and mention any relevant projects where you led technical strategy or mentored a team. Be sure to include your visa status, current location, and a link to your LinkedIn profile as requested.
# Career Roadmap
| Current Role | Typical Experience | Core Focus | Next Position |
|---|---|---|---|
| Sr UI Developer | 10+ years in front‑end, proven leadership | Architecture, mentoring, cross‑team collaboration | Lead UI Engineer / Front‑end Architect |
| Lead UI Engineer | 2–4 years leading multiple projects | Strategic roadmap, performance optimization | Front‑end Director |
| Front‑end Director | 5+ years overseeing UI strategy across org | Organizational leadership, technology vision | VP of Engineering (Front‑end) |
Remote
Contract
Not Specified
Demand for high‑performance backend services is exploding, and companies are racing to modernize with Go, Java, and Python. This niche role lets you work remotely while applying your multi‑language backend expertise to cutting‑edge projects. It’s a rare chance to join a C2C arrangement that values deep technical skill and flexibility.
# Job Summary
We are looking for a seasoned Backend Developer proficient in Golang, Java, and Python to design, build, and maintain scalable server‑side applications. The candidate will work remotely, collaborating with cross‑functional teams to deliver high‑throughput APIs and microservices, while adhering to EST working hours.
# Top 3 Critical Skills Table
| Skill | Why it's critical | Mastery Level |
|-------|-------------------|--------------|
| Go (Golang) | Core language for high‑concurrency backend services | Senior |
| Java | Enables integration with legacy enterprise systems and robust tooling | Mid |
| Python | Ideal for scripting, automation, and rapid prototyping of services | Mid |
# Interview Preparation
1. **Question**: Explain how Go's goroutine scheduler works and how you would avoid race conditions in a high‑traffic service.
**What the interviewer is looking for**: Understanding of concurrency primitives, channels, mutexes, and best practices for safe parallel execution.
2. **Question**: Describe the differences between Java's garbage collection algorithms and when you would choose one over another.
**What the interviewer is looking for**: Knowledge of JVM memory management, GC tuning, and impact on latency‑sensitive applications.
3. **Question**: How would you design a RESTful API that supports both JSON and Protobuf payloads using Python?
**What the interviewer is looking for**: Ability to implement content negotiation, serialization strategies, and maintain backward compatibility.
4. **Question**: Walk through the steps you take to profile and optimize a Go microservice experiencing latency spikes.
**What the interviewer is looking for**: Familiarity with pprof, tracing, bottleneck identification, and code‑level optimizations.
5. **Question**: In a C2C remote setup, how do you ensure effective communication across EST time zones and maintain project momentum?
**What the interviewer is looking for**: Soft‑skill awareness of asynchronous communication tools, stand‑ups, and clear documentation practices.
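Question 3 can be sketched concretely. The helper below (plain Python, standard library only) shows the content‑negotiation core; the protobuf branch is deliberately stubbed, since real protobuf serialization requires generated message classes:

```python
import json

SUPPORTED = ("application/json", "application/x-protobuf")

def negotiate(accept_header: str) -> str:
    """Pick the first supported media type from an Accept header.

    Falls back to JSON when nothing matches or on '*/*'."""
    for part in accept_header.split(","):
        media = part.split(";")[0].strip().lower()
        if media in SUPPORTED:
            return media
        if media == "*/*":
            return "application/json"
    return "application/json"

def serialize(payload: dict, content_type: str) -> bytes:
    if content_type == "application/json":
        return json.dumps(payload).encode("utf-8")
    # Stand-in: a real service would call SerializeToString()
    # on a protobuf class generated from the shared .proto schema.
    raise NotImplementedError("protobuf requires generated message classes")
```

Defaulting to JSON keeps the API backward compatible for clients that send no (or an unknown) Accept header, which is exactly the compatibility point the question probes.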
# Resume Optimization
- Backend Development
- Golang
- Java
- Python
- Microservices
- RESTful APIs
- C2C Contract
- Remote Work
- EST Timezone
- Visa Sponsorship (H1B, H4EAD, L2)
# Application Strategy
When reaching out to the recruiter, send a concise email that starts with a friendly greeting, attaches your up‑to‑date resume, and clearly maps your experience to the role. Highlight your top backend skills, such as Golang concurrency, Java enterprise integration, and Python automation, and reference any relevant projects where you delivered scalable services. Make sure to mention your eligibility for the listed visa categories and your comfort working EST hours.
# Career Roadmap
| Current Role | Typical Experience | Core Focus | Next Position |
|--------------|-------------------|------------|---------------|
| Junior Backend Engineer | 0‑2 years | Learn language fundamentals, write simple services | Mid‑Level Backend Engineer |
| Mid‑Level Backend Engineer | 2‑5 years | Own microservices, optimize performance, mentor juniors | Senior Backend Engineer |
| Senior Backend Engineer | 5+ years | Architecture design, lead projects, drive tech strategy | Lead Backend Architect / Engineering Manager |
Plainsboro, NJ
Contract
Not Specified
Artificial intelligence and machine learning are reshaping every industry, and companies that master Snowflake’s Cortex platform are gaining a massive competitive edge. As an AI/ML Lead you’ll be at the forefront of building scalable, data‑driven solutions that power next‑generation analytics. This role is a rare chance to combine deep technical expertise with strategic leadership in a high‑growth environment.
# Job Summary
Lead the design, development, and deployment of AI/ML solutions on Snowflake Cortex, mentor a cross‑functional team, and partner with stakeholders to translate business problems into data‑centric products. Own end‑to‑end model lifecycle, ensure performance, security, and scalability, and drive best practices across the organization.
# Top 3 Critical Skills Table
| Skill | Why it's critical | Mastery Level |
|-------|-------------------|--------------|
| Snowflake Cortex & Snowpark | Core platform for data storage, processing, and model serving | Senior |
| AI/ML Model Development & Deployment | Enables end‑to‑end pipeline from research to production | Senior |
| Technical Leadership & Team Mentoring | Drives alignment, quality, and rapid delivery across teams | Senior |
# Interview Preparation
1. **Explain how you would architect an end‑to‑end ML pipeline on Snowflake Cortex.**
*What the interviewer is looking for:* Understanding of Snowpark, data ingestion, feature engineering, model training, and deployment within Snowflake.
2. **Describe a situation where you had to optimize model performance on large datasets. What techniques did you use?**
*What the interviewer is looking for:* Experience with distributed computing, data partitioning, model quantization, or hyperparameter tuning at scale.
3. **How do you ensure data security and governance when building AI solutions on a cloud data warehouse?**
*What the interviewer is looking for:* Knowledge of role‑based access control, data masking, audit logging, and compliance standards.
4. **Walk us through your approach to mentoring junior data scientists and engineers.**
*What the interviewer is looking for:* Leadership style, coaching methods, and measurable impact on team productivity.
5. **What are the trade‑offs between using Snowpark Python vs. external ML frameworks (e.g., TensorFlow) for model training?**
*What the interviewer is looking for:* Insight into performance, integration, resource management, and maintainability.
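To ground the pipeline questions, here is a deliberately toy sketch of a feature‑standardization and scoring step. In a real Cortex deployment this logic would run inside Snowflake as vectorized Snowpark DataFrame operations; the plain‑Python version below only illustrates the shape of the transformation:

```python
from statistics import mean, stdev

def zscore_features(values: list[float]) -> list[float]:
    """Standardize one feature column (z-scores)."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

def flag_outliers(values: list[float], threshold: float = 2.0) -> list[bool]:
    """A stand-in 'model': flag rows whose |z-score| exceeds the threshold."""
    return [abs(z) > threshold for z in zscore_features(values)]
```

The interview follow‑up is where to run each step: pushing standardization into the warehouse avoids moving raw data out, while the model itself may live in Snowpark or an external framework depending on the trade‑offs in question 5.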
# Resume Optimization
- AI/ML Lead
- Snowflake Cortex
- Snowpark
- End‑to‑End ML Pipeline
- Model Deployment
- Data Governance
- Cloud Data Warehouse
- Technical Leadership
- Team Mentoring
- Performance Optimization
# Application Strategy
When reaching out to the recruiter, send a concise email that starts with a friendly greeting, attaches your updated resume, and clearly highlights your top relevant skills. Make sure to mention related skills you possess, such as Snowflake Cortex expertise, end‑to‑end ML pipeline development, and proven technical leadership. Reference specific projects where you delivered AI/ML solutions at scale, and align your experience with the key responsibilities listed in the job description.
# Career Roadmap
| Current Role | Typical Experience | Core Focus | Next Position |
|--------------|-------------------|------------|---------------|
| AI/ML Lead (Cortex) | 5‑7 years in AI/ML, Snowflake expertise | Architecture, team leadership, production ML | Senior AI/ML Architect |
| Senior AI/ML Architect | 8‑10 years, cross‑cloud deployments | Enterprise AI strategy, innovation pipelines | Director of AI Engineering |
| Director of AI Engineering | 10+ years, multi‑team management | Organizational AI vision, budget, partnership | VP of Data & AI |
McLean, VA
Full-time
Not Specified
Data engineering is the backbone of modern analytics, and companies are racing to build scalable pipelines that turn raw data into actionable insights. With the surge in cloud adoption, expertise in AWS and distributed processing frameworks like Spark is more valuable than ever. This Senior Data Engineer role in McLean, VA offers a chance to lead complex data projects on-site while leveraging cutting‑edge technologies.
# Job Summary
We are seeking a Senior Data Engineer to design, develop, and maintain robust data pipelines on AWS using Spark (PySpark), SQL, and Python. The role involves data modeling, Hive integration, and ensuring high‑performance data processing for enterprise‑level analytics.
# Top 3 Critical Skills Table
| Skill | Why it's critical | Mastery Level |
|-------|-------------------|--------------|
| AWS | Provides the cloud infrastructure for scalable storage and compute resources essential for modern data pipelines. | Senior |
| Spark / PySpark | Enables distributed processing of large datasets, reducing latency and supporting real‑time analytics. | Senior |
| Data Modeling | Ensures data is organized efficiently for querying, reporting, and downstream consumption. | Senior |
# Interview Preparation
1. **Explain how you would design an end‑to‑end data pipeline on AWS using Spark and S3.**
*What the interviewer is looking for:* Understanding of AWS services (S3, EMR, Glue), data ingestion, transformation logic, and fault tolerance.
2. **How do you optimize Spark jobs for performance and cost?**
*What the interviewer is looking for:* Knowledge of partitioning, caching, broadcast variables, executor sizing, and cost‑aware cluster configuration.
3. **Describe the differences between Hive tables and Spark DataFrames and when you would use each.**
*What the interviewer is looking for:* Insight into schema management, query performance, and integration scenarios.
4. **What strategies do you employ for data modeling in a data lake vs. a data warehouse?**
*What the interviewer is looking for:* Ability to choose appropriate schema designs (e.g., star, snowflake) and partitioning for analytical workloads.
5. **Walk through a complex SQL query you wrote to transform raw event data into a reporting dataset.**
*What the interviewer is looking for:* Proficiency with advanced SQL concepts, window functions, and query optimization.
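Hive‑style partitioning threads through questions 1–4 above. As a minimal sketch (bucket and table names are hypothetical), this is the path convention that lets Spark and Hive prune partitions instead of scanning the whole dataset:

```python
from datetime import date

def partition_path(bucket: str, table: str, event_date: date) -> str:
    """Build a Hive-style partition prefix (dt=YYYY-MM-DD) on S3.

    Queries filtered on dt read only the matching prefixes,
    which is the core of partition pruning."""
    return f"s3://{bucket}/{table}/dt={event_date.isoformat()}/"
```

Choosing the partition key (event date vs. ingestion date, daily vs. hourly granularity) is the real design decision; too many small partitions hurts Spark as much as too few large ones.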
# Resume Optimization
- Senior Data Engineer
- AWS
- Spark
- PySpark
- Python
- SQL (Advanced)
- Hive
- Data Modeling
- ETL Pipelines
- Distributed Computing
# Application Strategy
When reaching out to the recruiter, send a concise email that opens with a friendly greeting, attaches your updated resume, and clearly highlights your top skills that match the role. Make sure to mention related skills you possess, such as AWS cloud architecture, Spark/PySpark development, and data modeling expertise, and reference any projects where you built end‑to‑end data pipelines.
# Career Roadmap
| Current Role | Typical Experience | Core Focus | Next Position |
|--------------|--------------------|------------|---------------|
| Senior Data Engineer | 5‑7 years in data engineering, cloud platforms, distributed processing | End‑to‑end pipeline design, performance tuning, mentorship | Lead Data Engineer |
| Lead Data Engineer | 7‑10 years, strategic project ownership, team leadership | Architecture governance, cross‑team collaboration, technology road‑mapping | Data Engineering Manager |
| Data Engineering Manager | 10+ years, people management, budgeting, stakeholder alignment | Managing multiple teams, driving innovation, budget oversight | Director of Data Engineering |
| Director of Data Engineering | 12+ years, executive leadership, enterprise data strategy | Vision setting, organization‑wide data initiatives, executive communication | VP of Data & Analytics |
St. Louis, MO
Contract
Not Specified
The healthcare payer landscape is rapidly evolving, demanding engineers who can blend domain expertise with cutting‑edge data solutions. As fraud, waste, and abuse become focal points, organizations need leaders who can build robust pipelines to safeguard revenue. This Lead Data Engineer role offers a chance to drive impact at the intersection of technology and healthcare finance.
# Job Summary
Lead the design, development, and optimization of data platforms supporting healthcare payer systems, payment integrity, and fraud detection. Collaborate with cross‑functional teams to ingest claims data, implement Python and Java processing jobs, and ensure scalable, compliant solutions for the St. Louis region.
# Top 3 Critical Skills Table
| Skill | Why it's critical | Mastery Level |
|-------|-------------------|---------------|
| Healthcare payer systems & claims | Core domain knowledge to model, cleanse, and analyze complex claims data | Senior |
| Python (data pipelines) | Primary language for ETL, automation, and ML‑ready data preparation | Senior |
| Fraud, Waste & Abuse detection | Drives payment integrity initiatives and reduces financial loss | Senior |
# Interview Preparation
1. **Describe your experience building end‑to‑end data pipelines for claims processing.**
*What the interviewer is looking for:* Ability to articulate pipeline architecture, tools used (e.g., Spark, Airflow), handling of data volume, and data quality controls.
2. **How have you implemented fraud or payment‑integrity algorithms in a production environment?**
*What the interviewer is looking for:* Understanding of statistical/ML techniques, real‑time scoring, and integration with downstream systems.
3. **Explain the challenges of working with healthcare payer data and how you ensure compliance (HIPAA, etc.).**
*What the interviewer is looking for:* Knowledge of data privacy, encryption, access controls, and audit logging.
4. **Compare Python and Java for large‑scale data processing; when would you choose one over the other?**
*What the interviewer is looking for:* Insight into performance, ecosystem libraries, team expertise, and maintainability.
5. **What strategies do you use to optimize query performance on massive claim datasets?**
*What the interviewer is looking for:* Experience with partitioning, indexing, caching, and cost‑based optimization.
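To ground the fraud‑detection questions, here is a deliberately simplified duplicate‑billing rule in Python. Field names are hypothetical, and production payment‑integrity engines layer many such rules alongside ML scoring:

```python
def find_duplicate_claims(claims: list[dict]) -> list[tuple]:
    """Flag claim pairs sharing member, provider, procedure code, and
    date of service -- a classic duplicate-billing pattern."""
    seen = {}        # composite key -> first claim_id seen
    duplicates = []  # (original claim_id, duplicate claim_id) pairs
    for claim in claims:
        key = (claim["member_id"], claim["provider_id"],
               claim["procedure_code"], claim["service_date"])
        if key in seen:
            duplicates.append((seen[key], claim["claim_id"]))
        else:
            seen[key] = claim["claim_id"]
    return duplicates
```

In an interview, the follow‑ups write themselves: how the rule handles legitimate repeat procedures (modifier codes), and how it scales when the claims set no longer fits in one process's memory.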
# Resume Optimization
- Lead Data Engineer
- Healthcare payer systems
- Payment Integrity
- Fraud, Waste & Abuse detection
- Claims data processing
- Python
- Java
- Data pipelines
- St. Louis
- C2C contract
# Application Strategy
When reaching out to the recruiter, send a concise email that starts with a friendly greeting, attaches your resume, and clearly highlights your top skills. Make sure to mention related skills you possess, such as Python, Java, and healthcare payer data expertise, and reference any projects where you built fraud‑detection pipelines or managed large claim datasets.
# Career Roadmap
| Current Role | Typical Experience | Core Focus | Next Position |
|--------------|--------------------|------------|---------------|
| Lead Data Engineer | 5‑7 years data engineering, healthcare domain | Architecture, mentorship, fraud analytics | Data Engineering Manager |
| Data Engineering Manager | 8‑10 years, team leadership | Strategy, cross‑team delivery, budget | Director of Data Engineering |
Alpharetta, GA
Contract
Not Specified
The surge in autonomous AI and agent automation is reshaping how products are built, making Python full‑stack engineers more valuable than ever. Companies are hunting talent who can blend robust backend skills with cutting‑edge LLM integration to deliver intelligent, end‑to‑end solutions. This role in Alpharetta offers a unique C2C opportunity to dive deep into OpenAI technologies while shaping next‑generation automation platforms.
# Job Summary
We are seeking a Python Full‑Stack Engineer to design, develop, and maintain end‑to‑end web applications that incorporate autonomous AI agents and large language models. The candidate will collaborate with product and data teams to implement prompt‑engineered solutions, integrate OpenAI APIs, and ensure scalable, secure deployments for local clients in Alpharetta, GA.
# Top 3 Critical Skills Table
| Skill | Why it's critical | Mastery Level |
|-------|-------------------|---------------|
| Python Full‑Stack Development | Powers both front‑end and back‑end of AI‑driven apps | Senior |
| LLM Integration (OpenAI) | Enables autonomous agent capabilities and natural language features | Senior |
| Prompt Engineering | Crafts effective queries to extract accurate model outputs | Mid |
# Interview Preparation
1. **How do you design a full‑stack architecture that supports real‑time LLM inference?**
*What the interviewer is looking for:* Understanding of API gateways, async processing, caching, and latency mitigation.
2. **Explain the process of prompt engineering for an autonomous agent. What metrics do you use to evaluate effectiveness?**
*What the interviewer is looking for:* Knowledge of prompt design, temperature, token limits, and evaluation via relevance or task success rates.
3. **Describe a situation where you integrated OpenAI’s API into a web application. What challenges did you face and how did you resolve them?**
*What the interviewer is looking for:* Practical experience, handling authentication, rate limits, error handling, and security.
4. **What security considerations are essential when exposing AI services to external users?**
*What the interviewer is looking for:* Awareness of data privacy, API key management, input sanitization, and compliance.
5. **How would you optimize a Python backend to handle high‑throughput AI requests while keeping costs low?**
*What the interviewer is looking for:* Strategies like batching, async I/O, caching, and efficient resource provisioning.
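The rate‑limit handling in question 3 usually reduces to retry with exponential backoff. Here is a generic sketch; swap the `retryable` tuple for the actual SDK's rate‑limit exception (this example retries on `TimeoutError` purely for illustration):

```python
import time

def with_retries(call, max_attempts=4, base_delay=0.5,
                 retryable=(TimeoutError,)):
    """Invoke `call`, retrying with exponential backoff on retryable errors.

    Delays grow as base_delay * 2**attempt; the last failure is re-raised."""
    for attempt in range(max_attempts):
        try:
            return call()
        except retryable:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Wrapping each LLM call this way (often with jitter added to the delay) keeps transient 429s from cascading into user‑visible failures.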
# Resume Optimization
- Python
- Full‑Stack Development
- OpenAI API
- Large Language Models (LLM)
- Autonomous AI
- Agent Automation
- Prompt Engineering
- C2C Engagement
- Cloud Deployment (AWS/Azure)
- RESTful APIs
# Application Strategy
When reaching out to the recruiter, send a concise email that opens with a friendly greeting, attaches your updated resume, and clearly highlights your top skills that match the role. Make sure to mention related skills you possess, such as Python full‑stack development, OpenAI LLM integration, and prompt engineering, and reference any relevant projects where you built autonomous AI agents.
# Career Roadmap
| Current Role | Typical Experience | Core Focus | Next Position |
|--------------|--------------------|------------|---------------|
| Python Full‑Stack Engineer | 0‑3 years | Build AI‑enhanced web apps | Senior Full‑Stack Engineer |
| Senior Full‑Stack Engineer | 3‑6 years | Lead complex integrations, mentor junior devs | AI Solutions Architect |
| AI Solutions Architect | 6‑10 years | Design enterprise AI platforms, strategy | Director of AI Engineering |
Coppell, TX 75019
Contract
Not Specified
Cloud networking is at the heart of modern digital transformation, and Azure continues to dominate enterprise infrastructure. Companies are seeking engineers who can seamlessly integrate and remediate complex network environments in the cloud. This Core Network Engineer role offers a hands‑on opportunity in Texas to drive Azure networking projects on a C2C basis.
# Job Summary
We are looking for a Core Network Engineer specialized in Azure to design, implement, and troubleshoot network integration and remediation tasks. The role is on‑site in Coppell, TX, and operates under a Corp‑to‑Corp arrangement, requiring strong Azure networking expertise and the ability to work independently.
# Top 3 Critical Skills Table
| Skill | Why it's critical | Mastery Level |
|---|---|---|
| Network Integration & Remediation | Guarantees smooth migration and rapid issue resolution across hybrid and Azure‑only environments | Senior |
| Azure Networking | Forms the backbone of cloud infrastructure—VNet design, subnets, security groups, and routing | Senior |
| C2C Contract Management | Enables independent work, compliance handling, and effective client communication | Mid |
# Interview Preparation
1. **How do you design a resilient Azure Virtual Network architecture?**
*What the interviewer is looking for:* Understanding of hub‑spoke topology, subnet segmentation, NSGs, Azure Firewall, and redundancy options.
2. **Explain the steps you take to troubleshoot a connectivity issue between an on‑premises data center and Azure.**
*What the interviewer is looking for:* Knowledge of VPN/ExpressRoute, routing tables, BGP, and diagnostic tools like Network Watcher.
3. **What strategies do you use for network remediation after a migration to Azure?**
*What the interviewer is looking for:* Experience with post‑migration validation, performance tuning, security hardening, and documentation.
4. **Describe a time you had to integrate a legacy network service into Azure. What challenges did you face?**
*What the interviewer is looking for:* Ability to handle compatibility issues, protocol translation, and minimal downtime.
5. **How do you manage and secure network traffic flow in a multi‑tenant Azure environment?**
*What the interviewer is looking for:* Use of NSGs, Azure Policy, service endpoints, and monitoring solutions.
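The subnet‑segmentation arithmetic behind question 1 can be sketched with Python's standard `ipaddress` module (the CIDR values here are illustrative, not a recommended Azure layout):

```python
import ipaddress

def plan_subnets(vnet_cidr: str, new_prefix: int, count: int) -> list[str]:
    """Carve the first `count` equal subnets out of a VNet address space --
    the arithmetic behind hub-spoke subnet segmentation."""
    vnet = ipaddress.ip_network(vnet_cidr)
    subnets = []
    for subnet in vnet.subnets(new_prefix=new_prefix):
        if len(subnets) == count:
            break
        subnets.append(str(subnet))
    return subnets
```

In a real design you would also reserve address space for Azure's required subnets (e.g., a gateway subnet) and leave headroom for growth before committing the plan.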
# Resume Optimization
- Core Network Engineer
- Azure Networking
- Network Integration
- Network Remediation
- C2C Contract
- Hybrid Cloud
- VPN/ExpressRoute
- Hub‑Spoke Architecture
- Network Watcher
- Texas (Coppell)
# Application Strategy
When reaching out to the recruiter, send a concise email that starts with a friendly greeting, attaches your updated resume, and clearly highlights your top skills that match the role. Make sure to mention related skills you possess, such as **Azure networking**, **network integration & remediation**, and **experience working on C2C contracts**. Reference any relevant projects where you designed or troubleshot Azure networks, and explicitly map those experiences to the job requirements.
# Career Roadmap
| Current Role | Typical Experience | Core Focus | Next Position |
|---|---|---|---|
| Core Network Engineer (Azure) | 2‑4 years in network ops & Azure | Azure network design & remediation | Senior Network Engineer (Azure) |
| Senior Network Engineer (Azure) | 5‑7 years, leading projects | End‑to‑end Azure architecture | Network Architecture Lead |
| Network Architecture Lead | 8+ years, strategic planning | Multi‑cloud strategy & governance | Director of Network Engineering |
Austin, TX (Hybrid – Local candidates only)
Full-time
Not Specified
The demand for data‑driven decision‑making in healthcare is soaring, making BI roles more critical than ever. As a Senior Power BI/Tableau Developer you’ll shape analytics for a state health platform, directly impacting public health outcomes. This position offers a unique blend of advanced visualization, ETL mastery, and cloud data warehousing—all in a hybrid Austin setting.
# Job Summary
We are seeking a seasoned Senior BI Developer/Data Engineer to design, develop, and maintain end‑to‑end data pipelines and interactive dashboards using Informatica (IICS), Power BI or Tableau, and SQL. The role focuses on delivering scalable analytics solutions for a state health platform, leveraging Snowflake for cloud warehousing and ensuring data quality and performance.
# Top 3 Critical Skills Table
| Skill | Why it's critical | Mastery Level |
|-------|-------------------|--------------|
| Informatica (ETL/IICS) | Core engine for extracting, transforming, and loading health data across systems | Senior |
| Power BI / Tableau | Primary tools for visualizing insights and enabling self‑service analytics | Senior |
| Snowflake (or SQL & Data Pipelines) | Provides the scalable cloud data warehouse that powers fast query performance | Senior |
# Interview Preparation
1. **Explain how you would design an end‑to‑end ETL workflow in Informatica IICS for ingesting patient encounter data.**
*What the interviewer is looking for:* Understanding of source profiling, mapping design, performance tuning, error handling, and orchestration.
2. **Walk through the steps to optimize a Power BI dashboard that queries a large Snowflake table.**
*What the interviewer is looking for:* Knowledge of query folding, aggregations, incremental refresh, and DAX performance tricks.
3. **Describe a scenario where you had to migrate an on‑premise data warehouse to Snowflake. What challenges did you face and how did you resolve them?**
*What the interviewer is looking for:* Experience with data modeling changes, data loading strategies, security considerations, and cost optimization.
4. **How do you ensure data quality and lineage when building pipelines that combine multiple source systems?**
*What the interviewer is looking for:* Use of data profiling, validation rules, metadata management, and documentation practices.
5. **Compare Tableau vs. Power BI for a large‑scale health analytics project. Which would you choose and why?**
*What the interviewer is looking for:* Ability to evaluate tool strengths, licensing, integration with Snowflake, and end‑user adoption.
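Incremental refresh (question 2) boils down to re‑querying only a rolling window of recent data while leaving older partitions untouched. A minimal sketch of that window computation (the seven‑day default is illustrative):

```python
from datetime import date, timedelta

def refresh_window(today: date, days_back: int = 7) -> tuple[str, str]:
    """Return the (start, end) ISO dates an incremental refresh re-queries."""
    start = today - timedelta(days=days_back)
    return (start.isoformat(), today.isoformat())
```

Power BI generates an equivalent date filter from its incremental‑refresh policy; the performance win comes when that filter folds down to Snowflake so only recent micro‑partitions are scanned.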
# Resume Optimization
- Senior BI Developer
- Data Engineer
- Informatica IICS
- Power BI
- Tableau
- SQL
- Snowflake
- Data Pipelines
- ETL Development
- Healthcare Analytics
# Application Strategy
When reaching out to the recruiter, send a concise email that opens with a friendly greeting, attaches your updated resume, and clearly maps your experience to the role. Highlight your top skills—such as Informatica, Power BI/Tableau, and Snowflake—and reference specific projects where you built data pipelines or health‑focused dashboards. Make sure to mention your 12+ years of BI experience and your ability to work hybrid in Austin.
# Career Roadmap
| Current Role | Typical Experience | Core Focus | Next Position |
|--------------|--------------------|------------|---------------|
| Senior BI Developer / Data Engineer | 10‑12+ years, strong ETL & visualization expertise | End‑to‑end data pipelines, dashboard strategy, cloud warehousing | Lead BI Architect |
| Lead BI Architect | 15‑18 years, enterprise‑scale analytics governance | Architecture, stakeholder alignment, technology selection | Director of Analytics |
| Director of Analytics | 20+ years, P&L responsibility, innovation leadership | Organizational analytics vision, budget, talent development | VP of Data & Insights |