Quest Diagnostics
Senior Cloud Engineer
Job Description
The Cloud Operations and Data Quality Engineer is responsible for the deployment, monitoring, maintenance, and optimization of cloud infrastructure supporting Quest Diagnostics' Health Analytics Solutions (HAS) data platforms and analytics applications. This role ensures high availability, performance, and security of cloud-based systems while implementing best practices for infrastructure as code, automation, and operational excellence. The Cloud Operations Engineer works closely with data engineering, analytics, and development teams to support production workloads, troubleshoot issues, and implement scalable solutions that meet business and compliance requirements. This position plays a critical role in maintaining the reliability and efficiency of cloud infrastructure for HAS.
In addition to the cloud oversight responsibilities, the role is responsible for establishing and operationalizing a data management and data quality framework for the HAS data product. The role will collaborate with the Data Platform Product Manager to define data governance standards, implement automated quality controls, and ensure data assets are accurate, reliable, and fit for purpose.
Responsibilities:
Cloud Infrastructure Management:
• Deploy, configure, and maintain cloud infrastructure on AWS, Azure, or GCP platforms
• Manage compute resources, storage systems, networking, and security configurations
• Implement and maintain infrastructure as code using tools like Terraform, CloudFormation, or ARM templates
• Optimize cloud resource utilization and manage cost efficiency initiatives
• Maintain documentation of infrastructure architecture, configurations, and procedures
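The infrastructure-as-code responsibility above boils down to one cycle: declare desired resources as data, compare against observed state, and apply only the differences. A minimal sketch of that plan/diff step (the same cycle tools like Terraform automate), with entirely hypothetical resource names and attributes:

```python
# Infrastructure-as-code sketch: compute a change plan by diffing a
# desired-state declaration against the currently observed state.
# Resource names and attributes are illustrative, not a real environment.

def plan(desired: dict, current: dict) -> dict:
    """Return the create/update/delete actions needed to reach desired state."""
    to_create = {k: v for k, v in desired.items() if k not in current}
    to_update = {k: v for k, v in desired.items()
                 if k in current and current[k] != v}
    to_delete = [k for k in current if k not in desired]
    return {"create": to_create, "update": to_update, "delete": to_delete}

desired_state = {
    "web-server": {"type": "t3.medium", "disk_gb": 50},
    "db-server": {"type": "r5.large", "disk_gb": 200},
}
current_state = {
    "web-server": {"type": "t3.small", "disk_gb": 50},  # undersized: needs update
    "old-batch": {"type": "t2.micro", "disk_gb": 20},   # no longer declared: delete
}

changes = plan(desired_state, current_state)
print(changes)
```

Real tools add dependency ordering and provider API calls on top of this diff, but reviewing the plan before applying it is the core operational habit.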
Monitoring and Incident Response:
• Implement and maintain monitoring solutions for infrastructure, applications, and data pipelines
• Configure alerts and dashboards using tools like CloudWatch, Azure Monitor, Datadog, or similar
• Coordinate with cross-functional teams to respond to incidents, outages, and performance issues, and drive them to resolution
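The alerting work above typically means configuring threshold alarms: a CloudWatch-style alarm fires only when a metric breaches its threshold for N consecutive evaluation periods, which filters out transient spikes. A minimal sketch of that logic (threshold and metric samples are made up):

```python
def alarm_state(datapoints, threshold, periods):
    """Return 'ALARM' if the last `periods` datapoints all exceed the
    threshold (a consecutive-breach alarm); otherwise 'OK'."""
    if len(datapoints) < periods:
        return "INSUFFICIENT_DATA"
    recent = datapoints[-periods:]
    return "ALARM" if all(v > threshold for v in recent) else "OK"

# Hypothetical CPU% samples, most recent last.
cpu_percent = [42.0, 55.1, 91.3, 93.7, 95.2]
print(alarm_state(cpu_percent, threshold=90.0, periods=3))  # -> ALARM
```

Requiring several consecutive breaches is the standard way to trade a little detection latency for far fewer pages on noisy metrics.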
Automation and DevOps:
• Develop and maintain automation scripts using Python, PowerShell, or Bash
• Implement CI/CD pipelines for infrastructure deployment and application releases
• Automate routine operational tasks to improve efficiency and reduce manual effort
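Automating routine operational tasks usually starts with a script that supports a dry-run mode, so the operator can review what would change before anything is touched. A hypothetical snapshot-pruning sketch (a real script would query the cloud provider's API instead of a hard-coded inventory):

```python
from datetime import date, timedelta

def snapshots_to_prune(snapshots, retention_days, today):
    """Return IDs of snapshots created before the retention cutoff."""
    cutoff = today - timedelta(days=retention_days)
    return [s["id"] for s in snapshots if s["created"] < cutoff]

# Hypothetical inventory; IDs and dates are illustrative only.
inventory = [
    {"id": "snap-001", "created": date(2024, 1, 5)},
    {"id": "snap-002", "created": date(2024, 6, 1)},
    {"id": "snap-003", "created": date(2024, 6, 20)},
]
stale = snapshots_to_prune(inventory, retention_days=30, today=date(2024, 6, 25))
for snap_id in stale:
    print(f"[dry-run] would delete {snap_id}")
```

Keeping the selection logic separate from the deletion call makes the script easy to test and safe to run in report-only mode first.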
Security and Compliance:
• Implement and maintain security controls for cloud infrastructure and data platforms per Quest policies
• Ensure compliance with HIPAA, HITRUST, SOC 2, and other regulatory requirements
• Maintain audit logs and support compliance reporting activities
• Coordinate with security and compliance teams on assessments and audits
Data Management and Platform Operations:
• Support data platform infrastructure including data warehouses (Snowflake, Redshift), data lakes, and ETL systems
• Monitor data pipeline performance and troubleshoot processing issues
• Manage the data quality framework, including quality rules, monitoring standards, and the issue resolution process
• Implement automated data validation and monitoring controls across ingestion, transformation, and delivery pipelines
• Partner with Data Platform management to define and develop data quality KPI reporting
• Implement data retention and archival policies
• Manage database systems, including backup and recovery procedures
• Optimize query performance and resource allocation for analytical workloads
• Coordinate with data engineering teams on platform upgrades and maintenance
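The automated data validation responsibility above usually takes the shape of named quality rules applied to each batch, with a pass rate reported per rule. A minimal sketch, using invented lab-result records and rules (real checks would come from business-defined quality requirements):

```python
def run_quality_checks(records, rules):
    """Apply each named rule to every record; return the pass rate per rule."""
    report = {}
    for name, check in rules.items():
        passed = sum(1 for record in records if check(record))
        report[name] = passed / len(records)
    return report

# Hypothetical batch: one record is missing its result, one has a bad unit.
batch = [
    {"patient_id": "P1", "result": 4.2, "unit": "mmol/L"},
    {"patient_id": "P2", "result": None, "unit": "mmol/L"},
    {"patient_id": "P3", "result": 5.1, "unit": ""},
]
rules = {
    "completeness.result": lambda r: r["result"] is not None,
    "validity.unit": lambda r: r["unit"] in {"mmol/L", "mg/dL"},
}
report = run_quality_checks(batch, rules)
print(report)
```

Pass rates per rule feed naturally into the data quality KPI reporting mentioned above, and rule names keyed by dimension (completeness, validity) keep the report readable.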
Performance Optimization:
• Monitor system performance metrics and identify optimization opportunities
• Conduct capacity planning and resource scaling activities
• Optimize infrastructure costs through rightsizing, reserved instances, and resource cleanup
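Rightsizing decisions typically start from utilization percentiles: if an instance's p95 CPU stays well under capacity, it is flagged for a smaller size. A sketch with invented utilization data and an approximate nearest-rank percentile:

```python
def p95(values):
    """Approximate 95th percentile via nearest rank on sorted samples."""
    ordered = sorted(values)
    rank = max(0, int(0.95 * len(ordered)) - 1)
    return ordered[rank]

def rightsizing_candidates(instances, cpu_threshold=40.0):
    """Flag instances whose p95 CPU utilization sits below the threshold."""
    return [name for name, samples in instances.items()
            if p95(samples) < cpu_threshold]

# Hypothetical hourly CPU% samples per instance.
fleet = {
    "analytics-worker": [12, 15, 9, 22, 18, 30, 25, 11, 14, 20],
    "etl-runner": [70, 85, 92, 60, 78, 88, 95, 81, 74, 90],
}
print(rightsizing_candidates(fleet))  # -> ['analytics-worker']
```

Using a high percentile rather than the mean avoids downsizing instances that are quiet on average but spike under load.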
Collaboration and Support:
• Partner with data engineering, analytics, and development teams on infrastructure requirements
• Provide technical guidance and support for cloud infrastructure questions
• Collaborate with vendors and cloud service providers on technical issues
• Support deployment activities and release coordination
• Create and maintain technical documentation and knowledge base articles
• Train team members on cloud operations procedures and best practices
Qualifications:
Required Work Experience:
• 3+ years of experience in cloud operations, DevOps, or systems administration
• 3+ years of experience in data management, data quality, data governance, or data engineering within a production analytics environment
• Hands-on experience designing and implementing data quality controls across ingestion, transformation, and reporting layers
• Experience defining and operationalizing data governance standards, including data ownership, business definitions, and critical data elements
• 2+ years of hands-on experience with major cloud platforms (AWS, Azure, or GCP)
• Experience deploying and managing production infrastructure in cloud environments
• Proven track record of managing highly available, scalable systems
• Experience with infrastructure as code and automation tools
• Experience supporting data platforms or analytics workloads
• Background in healthcare, life sciences, or regulated industries with compliance requirements
Preferred Work Experience:
• 5+ years of cloud operations or DevOps experience
• Multi-cloud experience (AWS, Azure, and/or GCP)
• Experience with containerization and orchestration (Docker, Kubernetes, ECS)
• Background supporting Snowflake, Redshift, BigQuery, or other cloud data warehouses
• Experience with observability platforms (Datadog, New Relic, Splunk, or similar)
• Previous experience in healthcare with HIPAA compliance
• Experience with data pipeline orchestration tools (Airflow, dbt, Prefect)
• Experience implementing automated data quality monitoring tools (rule engines, pipeline validations, anomaly detection)
• Experience building and maintaining data dictionaries, metadata repositories, and lineage documentation
• Experience supporting AI/ML or advanced analytics use cases requiring high data accuracy and consistency
Knowledge, Skills, and Abilities:
Cloud Platform Expertise:
• Strong knowledge of AWS services (EC2, S3, RDS, Lambda, CloudWatch, IAM) or equivalent Azure/GCP services
• Understanding of cloud networking (VPC, subnets, security groups, load balancers, DNS)
• Experience with cloud security best practices and identity management
• Knowledge of cloud cost optimization strategies and tools
• Familiarity with cloud compliance frameworks and controls
Infrastructure and Automation:
• Proficiency with infrastructure as code tools (Terraform, CloudFormation, ARM templates, Pulumi)
• Strong scripting skills in Python, PowerShell, or Bash
• Experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, Azure DevOps)
• Knowledge of configuration management tools
• Understanding of version control systems (Git) and branching strategies
Data and Database Systems:
• Understanding of data warehouse technologies and SQL query optimization
• Experience with database administration for cloud databases
• Knowledge of ETL/ELT processes and data pipeline architectures
• Understanding of data security and encryption requirements
Monitoring and Troubleshooting:
• Experience implementing monitoring and alerting solutions
• Strong troubleshooting and analytical skills for complex technical issues
• Ability to read and analyze logs, metrics, and traces
• Experience with APM tools and distributed tracing
• Knowledge of incident management and post-mortem processes
Security and Compliance:
• Understanding of HIPAA, HITRUST, SOC 2, and other healthcare compliance requirements
• Knowledge of security best practices for cloud infrastructure
• Experience implementing security controls and conducting security assessments
• Familiarity with vulnerability management and patch management processes
Data Management:
• Strong understanding of data quality dimensions and how to implement measurable controls
• Ability to define data quality rules aligned to business requirements and translate them into technical validation checks
• Knowledge of data lifecycle management, including ingestion, transformation, storage, archival, and consumption
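The "translate business rules into technical validation checks" skill above often amounts to compiling a declarative rule specification into an executable check. A small sketch, with hypothetical rule types and a made-up business requirement (a glucose result must be present and within a plausible range):

```python
def compile_rule(spec):
    """Turn a declarative rule spec into a callable check on one record."""
    field = spec["field"]
    if spec["rule"] == "not_null":
        return lambda r: r.get(field) is not None
    if spec["rule"] == "in_range":
        lo, hi = spec["min"], spec["max"]
        return lambda r: r.get(field) is not None and lo <= r[field] <= hi
    raise ValueError(f"unknown rule: {spec['rule']}")

# Hypothetical rule specs, as a business analyst might declare them.
specs = [
    {"field": "result", "rule": "not_null"},
    {"field": "result", "rule": "in_range", "min": 0.0, "max": 50.0},
]
checks = [compile_rule(s) for s in specs]

record = {"patient_id": "P9", "result": 5.4}
print(all(check(record) for check in checks))  # -> True
```

Keeping rules declarative lets business owners review them directly, while the compiled checks slot into whatever pipeline framework enforces them.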
Required Education:
Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related technical field, or equivalent work experience
Preferred Education:
Master’s degree in Computer Science, Information Technology, or a related field
Preferred Certification:
Cloud certifications (AWS Solutions Architect, Azure Administrator, Google Cloud Professional, or equivalent), DevOps or Infrastructure certifications, ITIL Foundation, or security certifications (Security+, CISSP)
Quest Diagnostics honors our service members and encourages veterans to apply.
While we appreciate and value our staffing partners, we do not accept unsolicited resumes from agencies. Quest will not be responsible for paying agency fees for any individual as to whom an agency has sent an unsolicited resume.
Equal Opportunity Employer: Race/Color/Sex/Sexual Orientation/Gender Identity/Religion/National Origin/Disability/Vets or any other legally protected status.