Data Engineer - India GDC (Gurugram)
Job description
Who is ERM?
ERM is a leading global sustainability consulting firm, committed for nearly 50 years to helping organizations navigate complex environmental, social, and governance (ESG) challenges. We bring together a diverse and inclusive community of experts across regions and disciplines, providing a truly multicultural environment that fosters collaboration, professional growth, and meaningful global exposure. As a people-first organization, ERM values well-being, career development, and the power of collective expertise to drive sustainable impact for our clients—and the planet.
Introducing our new Global Delivery Centre (GDC)
Our Global Delivery Centre (GDC) in India is a unified platform designed to deliver high-value services and solutions to ERM’s global clientele. By centralizing key business and consulting functions, we streamline operations, optimize service delivery, and enable our teams to focus on what matters most—advising clients on sustainability challenges with agility and innovation. Through the GDC, you will collaborate with international teams, leverage emerging technologies, and further enhance ERM’s commitment to excellence—amplifying our shared mission to make a lasting, positive impact.
ERM is seeking a Data Engineer to join our Digital Products team. This is an exciting opportunity for an early- to mid-career data professional to support a global network of technical specialists and grow their skills within a purpose-driven organization.
RESPONSIBILITIES:
Work directly with project teams to solve their data problems, upskilling them along the way so they can solve similar problems independently in the future.
Identify common patterns across requests and build reusable tools and applications that consultants can use off the shelf for similar purposes in the future.
Understand requirements and own the execution of tasks while collaborating across the business.
Demonstrate capabilities using ERM's technology and AI tools, such as Microsoft Fabric and Copilot.
Create tools and resources to support project teams in achieving their goals independently.
QUALIFICATIONS:
Bachelor's degree in science, engineering, or mathematics.
Job-specific capabilities/skills:
Skills in data engineering and working with databases.
Experience in data science and coding in Python is desirable.
Experience with Microsoft Fabric is highly regarded.
Experience with large language models (LLMs) or other natural language processing tools.
Ability to work with non-technical specialists to upskill or train them.
Strong communication skills and the ability to articulate complex scenarios effectively.
Ability to work in a complex, global, dynamic organization and be effective within matrixed reporting environments and multi-partner contexts.
Problem-solving skills and the ability to make decisions by assessing situations and selecting appropriate courses of action.
TECHNICAL RESPONSIBILITIES:
Develop, test, and maintain scalable data pipelines in Azure and Microsoft Fabric.
Optimize data pipelines for performance, scalability, and reliability.
Evaluate and configure Azure storage (Blob, Data Lake, etc.).
Build and manage distributed processing pipelines to support large-scale data transformations using PySpark and parallel computing strategies.
Collaborate to translate prototype code into scalable, production-ready processes.
Document data engineering workflows and support internal data governance efforts.