Atlantis IT group

Data Engineer/Analyst Lead

Springfield, Illinois, US · Full-time · 19 hours ago

Company

Atlantis IT group

Job type

Full-time

Location

Springfield, Illinois, US

Posted

19 hours ago

Salary

Not specified

Job description

Data Engineer/Analyst Lead
Location: Chicago, IL

Key skills: Snowflake, PySpark. Must have more than 12 years of experience. Any valid US work authorization.

Job Summary

We are seeking a highly skilled Data Engineer with strong expertise in PySpark and Snowflake to support data platform modernization and analytics initiatives for AbbVie. The ideal candidate will design, develop, and optimize scalable data pipelines and data warehouse solutions to enable advanced analytics and business intelligence across enterprise systems.

Key Responsibilities

Design, build, and maintain scalable ETL/ELT pipelines using PySpark for large-scale data processing

Develop and optimize Snowflake data warehouse solutions, including schema design, data modeling (star/snowflake schema), and performance tuning

Implement data ingestion frameworks from multiple sources (APIs, databases, flat files, and streaming sources) into Snowflake

Write efficient SQL queries and Snowflake procedures for data transformation and analytics

Work with cloud platforms (AWS/Azure/GCP) to integrate data pipelines and storage solutions

Collaborate with data analysts, scientists, and business stakeholders to translate requirements into technical solutions

Ensure data quality, governance, and security standards are maintained

Optimize data pipelines for performance, scalability, and cost efficiency

Automate workflows using orchestration tools such as Airflow, Azure Data Factory, or AWS Glue

Monitor, troubleshoot, and support production data pipelines
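
To give candidates a concrete sense of the modeling work above, here is a minimal, hedged sketch of an ELT transform into a star schema. It uses an in-memory SQLite database as a stand-in for Snowflake, and all table and column names (`raw_orders`, `dim_customer`, `fact_orders`) are hypothetical, not part of any actual AbbVie or Atlantis system:

```python
import sqlite3

# In-memory SQLite stands in for a Snowflake warehouse in this sketch.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Raw ingested data: one denormalized landing table (hypothetical schema).
cur.execute(
    "CREATE TABLE raw_orders (order_id INT, customer_name TEXT, region TEXT, amount REAL)"
)
cur.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?, ?)",
    [(1, "Acme", "Midwest", 120.0), (2, "Acme", "Midwest", 80.0), (3, "Beta", "East", 50.0)],
)

# ELT transform: model a star schema with one dimension and one fact table.
cur.execute("CREATE TABLE dim_customer AS SELECT DISTINCT customer_name, region FROM raw_orders")
cur.execute("""
    CREATE TABLE fact_orders AS
    SELECT o.order_id, d.rowid AS customer_key, o.amount
    FROM raw_orders o
    JOIN dim_customer d ON d.customer_name = o.customer_name
""")

# Analytics query against the modeled tables: total amount per customer.
rows = cur.execute("""
    SELECT d.customer_name, SUM(f.amount)
    FROM fact_orders f
    JOIN dim_customer d ON d.rowid = f.customer_key
    GROUP BY d.customer_name
    ORDER BY d.customer_name
""").fetchall()
print(rows)  # [('Acme', 200.0), ('Beta', 50.0)]
```

In production the same shape would typically be expressed as PySpark DataFrame transforms writing to Snowflake tables, with the fact/dimension split driving both query performance and warehouse cost.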

Qualifications

  • More than 12 years of experience required
  • Any valid US work authorization
