docusign

Data Engineer

Company

docusign

Role

Data Engineer

Job type

Full-time

Posted

2 days ago

Salary

Not disclosed by employer

Job description

Company Overview

Docusign brings agreements to life. Over 1.5 million customers and more than a billion people in over 180 countries use Docusign solutions to accelerate the process of doing business and simplify people's lives. With intelligent agreement management, Docusign unleashes the business-critical data that is trapped inside documents. Until now, that data was disconnected from business systems of record, costing businesses time, money, and opportunity. Using Docusign's Intelligent Agreement Management platform, companies can create, commit to, and manage agreements with solutions created by the #1 company in e-signature and contract lifecycle management (CLM).

What you'll do

Docusign is seeking a talented and results-oriented Data + AI Engineer to design and deliver AI-native data solutions that provide trusted, actionable insights to the business. As a member of the Global Data Analytics (GDA) team, you will build and operate scalable data and AI pipelines on Snowflake and AWS, orchestrated with dbt, Airflow, Fivetran, and Hightouch, while integrating GenAI tools such as Claude, Glean, Gemini, Snowflake Intelligence, and enterprise copilots into day-to-day engineering workflows. On a typical day, you will develop agentic flows (e.g., Crew AI-style orchestration), ship new data and AI features for domains like MDM, Marketing, Finance, and Customer Success, and transform complex datasets into high-quality, "AI-ready" models that power analytics, telemetry, and decision intelligence. The ideal candidate brings a positive "can-do" attitude, a passion for learning and experimenting with new AI capabilities, and the drive to deliver high-impact solutions in partnership with a world-class team. This position is an individual contributor role reporting to the Manager, Data Engineering.
Responsibilities

- Champion an AI-first mindset, embedding AI into data workflows, internal tools, and business processes
- Design and maintain AI-native data solutions and scalable data/AI pipelines for analytics, RAG, fine-tuning, and copilots
- Build and operate agentic workflows (e.g., Crew AI, LangGraph, AutoGen) for complex multi-step business processes
- Develop and run LLM-integrated pipelines using Claude, Glean, Gemini, Snowflake Cortex, and enterprise copilots
- Define prompt engineering patterns, evaluation methods, and guardrails for production AI applications
- Translate business problems into data + AI solutions that deliver clear, actionable business insights
- Design robust data architectures spanning batch, streaming, and AI inference layers
- Build Snowflake-based pipelines using Python, dbt, and Airflow for analytics and AI workloads
- Implement and maintain data ingestion and reverse-ETL using Fivetran and Hightouch
- Model data for MDM, Marketing, Finance, and Customer Success (adoption, retention, telemetry)
- Manage AWS infrastructure (S3, EC2, IAM, Lambda, Step Functions, Glue) for data and AI platforms
- Enforce data privacy, governance, and compliance, including PII controls in Snowflake and AWS
- Implement data and AI observability with tools like Monte Carlo and Datadog
- Optimize SQL models and Python transformations to support feature stores and data products
- Apply CI/CD, Git, testing, and code review practices to data and AI engineering
- Partner with stakeholders across functions to gather requirements and design end-to-end data + AI architectures
- Own and monitor solutions to meet SLAs, SLOs, data freshness, and business KPIs
- Maintain clear documentation for data platforms, AI systems, and architecture
- Troubleshoot and resolve data and AI issues, driving root-cause fixes
- Deliver work using Agile/Scrum, acting as a strong team player and mentor

Job Designation

Hybrid: Employees divide their time between in-office and remote work. Access to an office location is required. (Frequency: minimum 2 days per week; may vary by team, but there will be a weekly in-office expectation.) Positions at Docusign are assigned a job designation of In Office, Hybrid, or Remote, specific to the role. Preferred job designations are not guaranteed when changing positions within Docusign. Docusign reserves the right to change a position's job designation depending on business needs and as permitted by local law.

What you bring

Basic

- Bachelor's degree in Computer Science, Data Analytics, Information Systems, or a related technical field
- 8+ years of combined Data Engineering and AI/ML Engineering experience in production environments
- Expert-level SQL and Snowflake skills, including performance tuning and dimensional/relational modeling
- Strong Python skills for data pipelines and AI integration
- Proven experience building and shipping agentic AI workflows (e.g., Crew AI, LangGraph, AutoGen, or similar) in production
- Experience with AI productivity and enterprise tools such as Claude, Glean, Gemini, Snowflake Cortex, GitHub Copilot, dbt Copilot, and other copilots
- Hands-on experience with Airflow and dbt, plus Fivetran (ingestion) and Hightouch (reverse-ETL)

Preferred

- 8+ years of dimensional and relational data modeling and OLAP data warehousing (Snowflake primary; Teradata/Redshift or similar a plus)
- 8+ years delivering ETL/ELT solutions from databases, SaaS platforms, APIs, flat files, and JSON using dbt, Matillion, and custom pipelines
- Deep understanding of GenAI application frameworks and prompt engineering, including RAG, semantic search, vector stores, and embedding pipelines
- Strong functional experience in MDM, Marketing attribution, Finance (AP/AR, invoicing), or Customer Success (adoption, retention, telemetry)
- Experience with data observability platforms such as Monte Carlo, Datadog, or Great Expectations
- Familiarity with data privacy regulations (e.g., GDPR, CCPA) and implementation of PII masking and role-based access controls
- Experience with AWS (S3, EC2, IAM, Lambda, Step Functions, Glue) and modern CI/CD practices (Git, pipelines, automated testing)
- Experience building BI dashboards and data products using Tableau, Looker, or equivalent
- Experience with transactional (OLTP) databases such as Oracle, SQL Server, or MySQL
- Experience creating ERDs with tools like SQLDBM, Erwin, dbdiagram.io, or equivalent
- Experience using Jira and Confluence in Scrum/Agile teams to manage AI/data delivery
- Demonstrated ability to mentor and coach engineers in AI and data engineering, acting as a force multiplier for the team

Life at Docusign

Working here: Docusign is committed to building trust and making the world more agreeable for our employees, customers, and the communities in which we live and work. You can count on us to listen, be honest, and try our best to do what's right, every day. At Docusign, everything is equal. We each have a responsibility to ensure every team member has an equal opportunity to succeed, to be heard, to exchange ideas openly, to build lasting relationships, and to do the work of their life. Best of all, you will be able to feel deep pride in the work you do, because your contribution helps us make the world better than we found it. And for that, you'll be loved by us, our customers, and the world in which we live.

Accommodation

Docusign is committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures. If you need such an accommodation, or a religious accommodation, during the application process, please contact us at accommodations@docusign.com. If you experience any issues, concerns, or technical difficulties during the application process, please get in touch with our Talent organization at taops@docusign.com for assistance.
Applicant and Candidate Privacy Notice #LI-Hybrid #LI-SA4
