Prodapt
Presales Engineers - Data

Posted: 5 hours ago

Job description
Overview

Presales Data Engineer - Data Modernisation & AI
Location: India (Bangalore / Hyderabad / Chennai / Pune; flexible)

Role Purpose
Support the Presales Lead in crafting winning solutions for large-scale data modernisation and AI-ready platform transformation deals. This is a hands-on technical presales role that combines deep data engineering skills across Databricks, Snowflake, and Google Cloud with a strong understanding of AI/ML to build compelling solution architectures, proofs of concept, and deal collateral.

Key Responsibilities

Solution Engineering & Deal Support
- Build detailed solution architectures and technical proposals in support of RFPs, proactive pursuits, and strategic deals.
- Develop effort estimations, platform sizing, and cost models for data modernisation engagements.
- Create high-quality solution decks, architecture diagrams, and technical write-ups under tight deal timelines.
- Support the Presales Lead in client-facing workshops, demos, and architecture walkthroughs.

Data Modernisation for AI
- Design migration and re-platforming approaches from legacy systems (Oracle, Teradata, Netezza, traditional ETL) to modern platforms.
- Build reference architectures for Data Warehouse → Lakehouse → AI-ready platform transformations.
- Develop code conversion strategies and automated migration frameworks.
- Ensure modernised data platforms are optimised for downstream AI/ML workloads, including feature engineering, model training pipelines, and serving layers.

Platform Engineering - Databricks, Snowflake & Google Cloud
- Databricks: Lakehouse architecture, Delta Lake, Unity Catalog, MLflow, Databricks Workflows, and Spark-based processing.
- Snowflake: data sharing, Snowpark, Streams & Tasks, dynamic tables, Snowflake Cortex, and governance features.
- Google Cloud Data & Analytics: BigQuery, Dataproc, Dataflow, Pub/Sub, Dataplex, Vertex AI integration, and Cloud Composer.
- Build reusable accelerators, templates, and demo environments across these platforms.

AI & Machine Learning Integration
- Apply strong working knowledge of AI/ML concepts, including supervised/unsupervised learning, LLMs, RAG architectures, and GenAI application patterns.
- Design data pipelines and platform architectures that enable AI readiness: clean, governed, feature-rich, and accessible data.
- Demonstrate understanding of MLOps practices, including model versioning, experiment tracking, and deployment pipelines.
- Support integration of AI capabilities into solution proposals, such as AI-assisted data quality, intelligent document processing, and predictive analytics use cases.

Data Governance & Quality
- Design governance layers within modern platforms (Unity Catalog, Snowflake governance, GCP Dataplex).
- Define data quality frameworks, lineage tracking, and cataloguing strategies as part of solution designs.
- Incorporate privacy, compliance, and security best practices into architectures.

Accelerator Development & Thought Leadership
- Build and maintain reusable presales assets: reference architectures, estimation templates, demo scripts, and proof-of-concept kits.
- Contribute to POVs, whitepapers, and technical blogs on data modernisation and AI topics.
- Stay current with platform releases, industry trends, and the competitive landscape across Databricks, Snowflake, and Google Cloud.

Collaboration
- Work closely with the Americas Presales Lead to ensure alignment on deal strategy and timelines.
- Coordinate with delivery teams to validate solution feasibility and transition smoothly from presales to execution.
- Engage with Databricks, Snowflake, and Google Cloud partner teams for joint solutioning and co-selling activities.

Required Skills & Experience
- 8–15 years of experience in data engineering, data platforms, or analytics, with at least 2 years in a presales or solutioning capacity.
- Hands-on expertise in at least two of the three core platforms: Databricks, Snowflake, and Google Cloud Data & Analytics.
- Proven experience in data warehouse modernisation: migration from legacy platforms (Oracle, Teradata, Netezza) to cloud-native architectures.
- Strong knowledge of AI/ML fundamentals, including GenAI, LLMs, RAG, and MLOps; the ability to articulate how data platforms enable AI outcomes is essential.
- Proficiency in SQL, Python, and Spark. Familiarity with infrastructure as code (Terraform) is a plus.
- Experience creating solution architectures, technical proposals, and effort estimations for large deals.
- Excellent communication skills: the ability to articulate technical solutions to both technical and business audiences.

Preferred Qualifications
- Certifications in Databricks (Data Engineer Associate/Professional), Snowflake (SnowPro Core/Advanced), or Google Cloud (Professional Data Engineer / Professional ML Engineer).
- Experience working with Americas or global clients across time zones.
- Exposure to FinOps and cloud cost optimisation strategies.
- Background in telecom, BFSI, or other enterprise verticals.
- Familiarity with data mesh and domain-driven data product architectures.