Docusign
SDET
Job description
Company Overview

Docusign brings agreements to life. Over 1.5 million customers and more than a billion people in over 180 countries use Docusign solutions to accelerate the process of doing business and simplify people’s lives. With intelligent agreement management, Docusign unleashes business-critical data that is trapped inside of documents. Until now, these were disconnected from business systems of record, costing businesses time, money, and opportunity. Using Docusign’s Intelligent Agreement Management platform, companies can create, commit, and manage agreements with solutions created by the #1 company in e-signature and contract lifecycle management (CLM).

What you'll do

We are seeking a skilled and independent Software Development Engineer in Test (SDET) at the P3 (Career) level to join our Quality Engineering team in Bengaluru. In this role, you will design, develop, and maintain scalable test automation frameworks across enterprise SaaS platforms, including Salesforce, Oracle ERP, MuleSoft integrations, and UiPath workflow automation, ensuring the highest levels of product quality across the end-to-end digital and direct customer journey.

A critical differentiator of this role is your ability to independently apply AI testing methodologies, including LLM evaluation, prompt testing, hallucination detection, and AI-assisted STLC practices, to ensure quality for both traditional and AI-powered product features. You will leverage Docusign's internal AI quality tooling (including the Quality Agent powered by CrewAI) and evaluator frameworks (such as Arize AX) to accelerate test delivery and improve coverage consistency. This is an individual contributor role reporting to the Senior Manager, SDET.
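To illustrate the hallucination-detection work described above, here is a minimal sketch. Everything in it is hypothetical (the `check_grounding` helper, the 0.5 threshold, and the sample strings are illustrative, not Docusign tooling): it flags answer sentences whose content words barely overlap with the retrieved context, a crude proxy for groundedness; production evaluators (LLM-as-judge, NLI-based checks) are far more robust.

```python
import re

def check_grounding(answer: str, context: str, threshold: float = 0.5):
    """Return answer sentences poorly supported by the retrieved context.

    Crude heuristic: a sentence counts as 'grounded' if at least
    `threshold` of its content words (4+ letters) also appear in the
    context. Anything below the threshold is flagged for review.
    """
    context_words = set(re.findall(r"[a-z]{4,}", context.lower()))
    ungrounded = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = re.findall(r"[a-z]{4,}", sentence.lower())
        if not words:
            continue
        overlap = sum(w in context_words for w in words) / len(words)
        if overlap < threshold:
            ungrounded.append(sentence)
    return ungrounded

context = "The contract renews on 1 March 2025 with a 30-day notice period."
answer = "The contract renews on 1 March 2025. Termination requires board approval."
print(check_grounding(answer, context))  # → ['Termination requires board approval.']
```

The second sentence is flagged because nothing in the retrieved context supports it, which is exactly the kind of output a post-processing fact-check would route to a human or a stronger evaluator.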
Responsibilities

- Design, develop, and maintain scalable and reusable test automation frameworks for enterprise applications across Salesforce Sales, Service & Experience Clouds, Oracle ERP, and MuleSoft integrations
- Independently own test coverage across multiple product components, including UI, API, backend, data validation, integration, and regression testing
- Develop and execute comprehensive test plans, test cases, and test scripts for end-to-end business flows aligned to acceptance criteria
- Drive shift-left quality practices by partnering with developers and platform architects early in the SDLC to identify testability gaps and integrate quality into design
- Own test coverage metrics, dashboarding, and quality reporting to leadership
- Perform exploratory, functional, SIT, regression, and UAT testing for critical releases; support production smoke testing and hypercare post go-live
- Identify requirement gaps, provide robust edge cases and error-handling scenarios beyond documented functionality, and contribute to continuous quality improvement
- Apply Docusign's AI testing framework to evaluate LLM-powered features and agentic AI workflows (e.g., Quality Agent, XDR Agents, Leads Agent)
- Design and execute prompt test suites to validate LLM output accuracy, consistency, tone, and alignment with expected outcomes across diverse input scenarios
- Conduct hallucination detection and post-processing fact-checking for LLM-generated content; validate that AI outputs are grounded in retrieved context
- Validate training/validation datasets for missing values, data leakage, class imbalance, duplicates, and biased sampling
- Execute model quality testing using metrics such as accuracy, precision/recall, F1, ROC-AUC, and regression error (MAE/RMSE) as applicable to the model type
- Verify model behavior under noisy, incomplete, adversarial, or out-of-distribution inputs
- Validate prompt-injection resistance, jailbreak detection, harmful/toxic content filtering, and secure handling of sensitive data and PII
- Implement AI evaluator frameworks using tools such as Arize AX for offline evaluation (datasets, experiments) and online testing (tracing, drift monitoring, confidence scoring)
- Track hallucination rates, latency variability, token cost, and model confidence thresholds in collaboration with MLOps teams
- Apply human-in-the-loop (HITL) evaluation practices to systematically assess AI output quality (helpfulness, correctness, compliance) where automated metrics are insufficient
- Leverage Docusign's internal Quality Agent (CrewAI-powered, Azure OpenAI-backed) to accelerate test case generation from Jira user stories, reducing manual effort and improving STLC throughput
- Test retrieval relevance, semantic search accuracy (Azure AI Search), context window utilization, and end-to-end response quality for RAG-based AI features
- Build and maintain robust, reusable automation suites for functional, regression, integration, performance, and data validation testing
- Automate end-to-end business flows using UiPath or equivalent low-code/RPA automation platforms; build solutions that reduce manual effort for repetitive processes
- Integrate automated tests into CI/CD pipelines (GitLab CI or equivalent) to support rapid, reliable releases and continuous quality gates
- Develop and maintain data validation automation scripts; perform source-to-target validation, data quality checks (accuracy, completeness, consistency, timeliness), and business rule validations on platforms such as Snowflake
- Implement performance and load testing strategies (e.g., JMeter, k6) for high-volume data operations and enterprise application integrations
- Conduct code reviews with a focus on test quality, reliability, and maintainability; contribute to continuous improvement of the automation system
- Partner closely with product managers, business analysts, developers, and platform architects to align testing strategy to business outcomes and delivery goals
- Participate in requirements workshops, sprint ceremonies, change management processes, and go-live support activities
- Proactively communicate quality risks, test coverage gaps, defect trends, and improvement recommendations to the team and leadership
- Mentor junior SDET and QA team members in automation best practices, AI testing tools, and low-code platform usage
- Contribute to standardizing quality practices and reporting on overall application quality metrics and KPIs across programs (Digital, Direct, Customer Success, GTM)

Job Designation

Hybrid: Employee divides their time between in-office and remote work. Access to an office location is required. (Frequency: minimum 2 days per week; may vary by team, but there will be a weekly in-office expectation.)

Positions at Docusign are assigned a job designation of either In Office, Hybrid or Remote and are specific to the role/job. Preferred job designations are not guaranteed when changing positions within Docusign. Docusign reserves the right to change a position's job designation depending on business needs and as permitted by local law.
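The source-to-target data validation responsibility listed above can be sketched with a small, self-contained example. An in-memory SQLite database stands in for the real source system and Snowflake target, and the table and column names are hypothetical: the checks compare row counts and flag source rows with no matching target row.

```python
import sqlite3

# In-memory database standing in for both the source system and the target.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (order_id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE tgt_orders (order_id INTEGER PRIMARY KEY, amount REAL);
    INSERT INTO src_orders VALUES (1, 100.0), (2, 250.5), (3, 75.0);
    INSERT INTO tgt_orders VALUES (1, 100.0), (2, 250.5);
""")

# Completeness check: row counts must match between source and target.
src_count = conn.execute("SELECT COUNT(*) FROM src_orders").fetchone()[0]
tgt_count = conn.execute("SELECT COUNT(*) FROM tgt_orders").fetchone()[0]

# Accuracy check: source rows with no matching row (same id and amount) in target.
missing = conn.execute("""
    SELECT s.order_id FROM src_orders s
    LEFT JOIN tgt_orders t
      ON s.order_id = t.order_id AND s.amount = t.amount
    WHERE t.order_id IS NULL
""").fetchall()

print(src_count == tgt_count)       # False: one row never landed in the target
print([row[0] for row in missing])  # [3]
```

The same LEFT JOIN / IS NULL pattern runs unchanged as plain SQL against a real warehouse, which is why it is a common building block for source-to-target validation suites.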
What you bring

Basic

- Bachelor's degree in Computer Science, Engineering, or a related field (Master's preferred)
- 5+ years of experience in QA/SDET roles (or 3+ years with a Master's degree), with at least 3 years focused on test automation for enterprise SaaS applications
- Strong proficiency in Python for building automation frameworks and AI evaluation scripts
- Hands-on experience with UI automation tools (UiPath, Selenium, Playwright) and API testing tools (Postman, RestAssured)
- Hands-on experience with Salesforce platform testing, including Apex testing across Sales, Service, and Experience Clouds
- Experience integrating automation into CI/CD pipelines (GitLab CI or equivalent) with Git-based version control
- Strong SQL proficiency for data validation, source-to-target testing, and business rule verification
- Working knowledge of AI/LLM testing fundamentals: prompt testing, hallucination detection, model quality metrics (F1, accuracy, precision/recall), and AI safety testing concepts
- Strong understanding of QA best practices, testing methodologies, and Agile/Scrum frameworks
- Proven experience with Jira and Zephyr Scale for test management and traceability

Preferred

- Hands-on experience with Arize AX or similar LLM observability/evaluation platforms (offline datasets, experiments, online tracing, and drift monitoring)
- Experience with CrewAI, LangChain, or similar agentic AI orchestration frameworks and approaches to testing multi-agent workflows
- Familiarity with RAG pipeline testing: retrieval relevance evaluation, vector database validation (Azure AI Search), and semantic search quality assessment
- Experience leveraging Docusign's internal Quality Agent for AI-assisted test case generation and STLC automation
- Knowledge of MLOps principles and testing methodologies for validating and monitoring ML models in production (e.g., data drift detection, model bias testing, A/B evaluation)
- Experience with performance testing tools such as JMeter, k6, Gatling, or Blazemeter
- Active Salesforce certifications (e.g., Platform Developer I, Administrator)
- Strong defect triage, debugging, and root cause analysis skills

Life at Docusign

Working here

Docusign is committed to building trust and making the world more agreeable for our employees, customers and the communities in which we live and work. You can count on us to listen, be honest, and try our best to do what’s right, every day. At Docusign, everything is equal. We each have a responsibility to ensure every team member has an equal opportunity to succeed, to be heard, to exchange ideas openly, to build lasting relationships, and to do the work of their life. Best of all, you will be able to feel deep pride in the work you do, because your contribution helps us make the world better than we found it. And for that, you’ll be loved by us, our customers, and the world in which we live.

Accommodation

Docusign is committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures. If you need such an accommodation, or a religious accommodation, during the application process, please contact us at accommodations@docusign.com. If you experience any issues, concerns, or technical difficulties during the application process, please get in touch with our Talent organization at taops@docusign.com for assistance.

Applicant and Candidate Privacy Notice

#LI-Hybrid #LI-SA4
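For reference, the model quality metrics named in the basic requirements (precision, recall, F1) reduce to a few lines of code. This is a generic sketch with toy labels, not Docusign tooling; in practice a library such as scikit-learn would compute these.

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for a single positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0   # of flagged items, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0      # of true items, how many were caught
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: 3 true positives, 1 false positive, 1 false negative.
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 1, 0, 1, 0, 1]
print(precision_recall_f1(y_true, y_pred))  # → (0.75, 0.75, 0.75)
```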


