State Farm
MID-LEVEL DATA ENGINEER - Python, AWS, Spark (Hybrid)

Location: Bloomington, IL; Richardson, TX; Tempe, AZ; or Dunwoody, GA (hybrid)
Job type: Full-time
Posted: 3 hours ago
Salary: $85,500 - $115,000 (potential starting range)
Job description
Overview

Being good neighbors – helping people, investing in our communities, and making the world a better place – is who we are at State Farm. It is at the core of how we operate and the reason for our success. Come join a #1 team and do some good!

HYBRID: Qualified candidates must live within close proximity to a hub location listed below and should plan to spend some time working from home and some time working in the office as part of our hybrid work environment.

HUB LOCATIONS: Bloomington, IL; Richardson, TX; Tempe, AZ; or Dunwoody, GA

SPONSORSHIP: Applicants for this position are required to be eligible to lawfully work in the U.S. immediately; the employer will not sponsor applicants for U.S. work authorization (e.g., H-1B visa) for this opportunity.

***The application window is expected to close on Friday, 5/22/2026 at 5pm CT. Applicant volume and hiring needs may result in early closure or extension beyond the listed deadline. To submit an application, click "Apply" on the job listing page on the State Farm career site.***

Responsibilities

Our Property & Casualty Data Pipeline team is hiring an Experienced Data Engineer to help advance one of the company's highest priorities. In this role, you will build and scale the data products and pipelines that power better business decisions, profitable growth, and stronger customer retention. You will develop automated, reliable analytical data assets that help transform how we operate and how we serve our customers.

This position sits within Enterprise Technology and offers the opportunity to work in a highly collaborative environment focused on delivering meaningful business impact. You will partner across teams, solve complex data challenges, and help shape modern data solutions aligned to enterprise goals. It is a strong fit for someone who wants to expand their technical skills, work on high-visibility initiatives, and contribute in a fast-moving, innovative setting.

Your Responsibilities May Include:
- Utilizing industry-adopted languages and frameworks in coding, testing, security, DevOps, DataOps, and data engineering practices
- Developing and maintaining reusable, scalable, and compliant data solutions across multiple platforms and compute environments
- Identifying, acquiring, cleansing, and profiling data, and performing ETL (extract, transform, load) for analytic discovery and production solution deployment across multiple platforms (a minimal sketch follows this list)
- Establishing business domain knowledge for existing State Farm data sources and investigating, recommending, and initiating acquisition of internal and external data resources
- Identifying and consulting on emerging technologies and critical core systems, including techniques, tools, data sources, and platforms in the data engineering field
- Handling datasets containing a mix of structured and unstructured data
- Exhibiting a DataOps mindset, in which the team is accountable for ensuring data aligns to enterprise needs and for leveraging automation to deliver quality data solutions
- Collecting and analyzing information to identify customers' technical needs, suggesting solutions, and developing implementation and integration plans (e.g., technical proposals)
- Analyzing, designing, deploying, supporting, and securing technology so the organization efficiently manages its technology and data-related assets in accordance with market best practices and external regulations
- Applying complex principles, theories, and concepts in computer science to data engineering solutions
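To give the ETL responsibility above some concrete flavor, here is a purely illustrative sketch, not State Farm code: the bucket, table, and column names are invented, and real pipelines would differ. It shows a minimal PySpark job covering the extract, transform, and load steps:

```python
# Illustrative sketch only: all paths, tables, and columns below are
# hypothetical, not real State Farm assets.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-etl-sketch").getOrCreate()

# Extract: read raw claim records from a (hypothetical) S3 location
raw = spark.read.parquet("s3://example-bucket/raw/claims/")

# Transform: drop duplicates and null amounts, derive a partition column
cleaned = (
    raw.dropDuplicates(["claim_id"])
       .filter(F.col("claim_amount").isNotNull())
       .withColumn("loss_year", F.year(F.col("loss_date")))
)

# Load: write a partitioned, analytics-ready asset back to S3
(cleaned.write
    .mode("overwrite")
    .partitionBy("loss_year")
    .parquet("s3://example-bucket/curated/claims/"))
```

In practice, a job like this would typically run on one of the managed compute options named in the qualifications below (Glue or EMR Serverless) and be triggered by an orchestrator rather than invoked by hand.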
Qualifications

We Are Seeking Candidates With:
- A minimum of 2-4 years of professional experience as a Data Engineer
- Proficiency in programming languages such as Python, Spark SQL (or PySpark), R, Java, and Bash
- Hands-on experience with AWS services, including ETL tools (Glue, EMR Serverless), Lambda, Step Functions, EventBridge, S3, DynamoDB, Kinesis Firehose, Redshift, Iceberg, and SageMaker
- Experience with distributed data processing frameworks such as Apache Spark and Databricks
- Experience with infrastructure-as-code tools such as OpenTofu (an open-source fork of Terraform) for managing cloud resources and deployments
- Familiarity with CI/CD pipelines, including automated testing, security scans, and tools like Airflow
- Additional experience with, or the ability to rapidly gain, P&C data domain knowledge, including rating, underwriting, and/or claims
- Experience with relational databases such as DB2, Postgres, and Redshift
- Experience with version control systems such as GitHub or GitLab
- Data access skills using SQL and Athena (see the sketch after this list)
- Experience designing, building, and maintaining data pipelines for automated data processing
- Knowledge of data modeling techniques, such as star schema and snowflake schema, with an understanding of data architecture
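As a hedged illustration of the SQL/Athena and star-schema items in the list above (the database, tables, and results bucket are hypothetical), a fact table joined to its dimension tables can be queried through the Athena API roughly like this:

```python
# Illustrative sketch only: the database, tables, and results bucket
# are hypothetical, not real State Farm resources.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# A star-schema query: a claims fact table joined to two dimensions
QUERY = """
SELECT d.calendar_year,
       p.policy_type,
       SUM(f.claim_amount) AS total_claims
FROM   claims_fact f
JOIN   policy_dim  p ON f.policy_key = p.policy_key
JOIN   date_dim    d ON f.date_key   = d.date_key
GROUP  BY d.calendar_year, p.policy_type
"""

response = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "example_analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
print(response["QueryExecutionId"])
```

start_query_execution is asynchronous, so a production pipeline would poll get_query_execution until the query completes before fetching results, typically from an orchestrator such as Airflow.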
Competencies
- Adaptability
- Work Ethic
- Critical Thinking
- Strategic Business Focus
- Technical/Functional Expertise

Our Benefits

Because work-life balance is a priority at State Farm, compensation is based on our standard 38:45-hour (38 hours, 45 minutes) work week!
- Potential starting salary range: $85,500 - $115,000
- Starting salary will be based on skills, background, and experience
- The high end of the range is limited to applicants with significant relevant experience
- Potential yearly incentive pay of up to 15% of base salary

At State Farm, we offer more than just a paycheck. Check out our suite of benefits designed to give you the flexibility you need to take care of you and your family!

Get Paid! On top of our competitive pay, you are eligible for an annual raise and bonus.

Stay Well! Focus on you and your family's health with our robust health and wellbeing programs. State Farm pays most of your healthcare premium, and we offer multiple healthcare plan options, including a high-deductible plan. All medical plans provide 100% coverage for in-network preventative care, AND you and your family have access to vision, dental, telemedicine, 24/7 mental health professionals, and much more!

Develop and Grow! Take advantage of educational benefits like industry-leading training programs, top-notch tuition assistance programs, employee resource groups, and mentoring.

Plan Ahead! Plan for those big moments in life with benefits like fertility/IVF/adoption assistance, college coaching, national discount programs, interactive monthly financial workshops, free financial coaching, and more. You can also start a savings account or consider financing through our State Farm Federal Credit Union!

Take a Little "You" Time! You will have access to our generous time-off policies, designed so you can plan around holidays, family events, volunteering, or just take a relaxing day off. With the opportunity to initially earn up to 20 days annually, plus parental leave, paid holidays, a celebration day, life leave (40 hours/year), bereavement leave, and community service/education support days, there will be plenty of time for you!

Give Back! We offer several ways to give back through our Matching Gift Program, Good Neighbor Grant Program, and the Employee Assistance Fund.

Finish Strong! Plan for retirement using free financial advisors and a 401(k) plan with company contributions of up to 7% of your salary.

Visit our State Farm Careers page for more information on our benefits, locations, and the process of joining the State Farm team!


