AIA
Data Engineer Consultant
Job description
MediCard Phils., Inc. is one of the country's leading HMOs and the only HMO founded and run by doctors. Since its inception, the concept of service-oriented total health care has been MediCard's guiding ideal. Competition is stiff, and the benefits offered by competitors are tempting; however, MediCard has taken the lead in providing innovative and productive ideas that cut the cost of health maintenance without compromising its quality.
MediCard now boasts more than half a million members and over 54,000 accredited doctors in over 1,000 hospitals and clinics nationwide. It also operates 16 MediCard free-standing clinics that provide services on par with those offered by hospitals, minus the confinement.
MediCard is currently looking for assertive, dynamic, and energetic individuals to fill the following vacancy. The role is responsible for capturing data from various sources into the system for data analytics:
- Apply data extraction, transformation, and loading (ETL) techniques to connect large data sets from a variety of sources
- Create data collection frameworks for structured and unstructured data
- Develop and maintain infrastructure systems such as data warehouses, data lakes, and data access application programming interfaces (APIs)
- Define data standards, develop data catalogues, and ensure data quality
- Data Platform Design aligned with Modern Data Architecture:
  · Design, build, and maintain scalable, secure, and high‑performance data platforms on Microsoft Azure.
  · Build end‑to‑end data solutions leveraging Azure Databricks, Azure Data Factory, Azure Data Lake, and related Azure services.
  · Ensure data designs follow best practices for reliability, cost optimization, performance, and extensibility.
- ETL / ELT Pipeline Development:
  · Develop, orchestrate, and maintain robust ETL/ELT pipelines using Azure Data Factory and Databricks.
  · Implement batch and incremental data ingestion from diverse sources including databases, APIs, SaaS platforms, and data streams.
  · Optimize pipelines for reliability, scalability, and performance while minimizing data latency.
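As an illustration of the incremental-ingestion pattern referenced above, here is a minimal high-watermark sketch in plain Python; the field name `modified_at` and the sample rows are illustrative assumptions, not part of the posting:

```python
from datetime import datetime, timezone

def filter_incremental(rows, watermark):
    """Keep only rows modified after the last high-watermark, and return
    the new watermark to persist for the next pipeline run."""
    fresh = [r for r in rows if r["modified_at"] > watermark]
    new_watermark = max((r["modified_at"] for r in fresh), default=watermark)
    return fresh, new_watermark

rows = [
    {"id": 1, "modified_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "modified_at": datetime(2024, 1, 5, tzinfo=timezone.utc)},
]
fresh, wm = filter_incremental(rows, datetime(2024, 1, 2, tzinfo=timezone.utc))
print([r["id"] for r in fresh])  # only the row newer than the watermark survives
```

In a real Azure Data Factory or Databricks pipeline, the watermark would be stored durably (e.g. in a control table) rather than passed in memory.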
- Databricks Engineering & Optimization:
  · Develop data transformation, enrichment, and aggregation logic using Databricks (PySpark, Spark SQL, Delta Lake).
  · Implement Delta Lake best practices including schema enforcement, versioning, time travel, and performance tuning.
  · Optimize Databricks clusters, jobs, and notebooks for cost efficiency and processing performance.
- Data Modeling & Storage:
  · Design and implement data models optimized for analytics, reporting, and downstream consumption.
  · Manage structured, semi‑structured, and unstructured data in Azure Data Lake and cloud‑based storage systems.
  · Apply partitioning, indexing, and compression strategies to improve query performance and storage efficiency.
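The partitioning strategy mentioned above is often realized as Hive-style date partitions in the lake, so queries can prune the files they scan. A minimal sketch follows; the storage account, container, and dataset names are hypothetical:

```python
from datetime import date

def partition_path(base, dataset, d):
    """Build a Hive-style year/month/day partition path, a common layout
    for date-partitioned data in a data lake."""
    return f"{base}/{dataset}/year={d.year}/month={d.month:02d}/day={d.day:02d}"

print(partition_path(
    "abfss://lake@examplestore.dfs.core.windows.net/raw",  # hypothetical container
    "claims",
    date(2024, 3, 7),
))
```

Engines that understand this layout can skip entire date partitions when a query filters on the date columns.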
- Data Quality & Reliability:
  · Implement data validation, quality checks, and reconciliation processes within ETL pipelines.
  · Monitor data freshness, completeness, accuracy, and consistency across data flows.
  · Troubleshoot and resolve data pipeline failures, data integrity issues, and performance bottlenecks.
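The validation and quality checks above can be sketched as simple completeness and uniqueness rules run before data is published downstream; the column names (`member_id`, `plan`) are illustrative only:

```python
def run_quality_checks(rows, required, key):
    """Flag missing required fields and duplicate keys -- two basic
    checks commonly embedded in ETL pipelines."""
    issues = []
    seen = set()
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) in (None, ""):
                issues.append(f"row {i}: missing {col}")
        k = row.get(key)
        if k in seen:
            issues.append(f"row {i}: duplicate {key}={k}")
        seen.add(k)
    return issues

rows = [
    {"member_id": "A1", "plan": "Gold"},
    {"member_id": "A1", "plan": None},
]
issues = run_quality_checks(rows, required=["member_id", "plan"], key="member_id")
print(issues)
```

In practice a pipeline would route such issues to a quarantine table or alert channel rather than printing them.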
- Cloud Security, Governance & Compliance:
  · Apply Azure security best practices including identity management, RBAC, network security, and encryption.
  · Ensure data pipelines and platforms comply with organizational governance standards and regulatory requirements.
  · Collaborate with security and platform teams to manage access control, secrets, and audit logging.
- Automation, Monitoring & DevOps:
  · Implement CI/CD pipelines for data engineering code and infrastructure using Azure DevOps or similar tools.
  · Automate deployments, testing, and configuration management for data platforms and pipelines.
  · Set up monitoring, alerting, and logging for Databricks jobs and ADF pipelines to ensure operational stability.
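The monitoring and alerting responsibilities above usually include retrying transient failures and raising an alert when retries are exhausted. A minimal, generic sketch of that pattern (the job and alert sink here are stand-ins, not any specific ADF or Databricks API):

```python
import time

def run_with_retries(job, max_attempts=3, backoff_s=0.0, alert=print):
    """Retry a flaky pipeline step, emitting an alert message on each
    failure and re-raising once attempts are exhausted."""
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception as exc:
            alert(f"attempt {attempt} failed: {exc}")
            if attempt == max_attempts:
                raise
            time.sleep(backoff_s)

attempts = {"n": 0}

def flaky_job():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient source timeout")
    return "loaded 1,000 rows"

result = run_with_retries(flaky_job)
print(result)
```

ADF activities and Databricks jobs offer built-in retry policies; a wrapper like this is the same idea expressed at the code level.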
- Collaboration & Stakeholder Engagement:
  · Work closely with data analysts, data scientists, BI developers, and business stakeholders to understand data requirements.
  · Translate business needs into technical data solutions that support analytics and decision‑making.
  · Participate in design reviews, sprint planning, and cross‑functional working sessions.
- Documentation & Knowledge Management:
  · Document data architectures, pipeline designs, transformation logic, and operational procedures.
  · Maintain technical documentation for Databricks notebooks, ADF pipelines, and data models.
  · Contribute to shared repositories, best‑practice guides, and team knowledge bases.
- Continuous Improvement & Innovation:
  · Stay current with Azure, Databricks, and data engineering best practices and emerging technologies.
  · Proactively identify opportunities to improve performance, scalability, and cost efficiency.
  · Support modernization initiatives, cloud adoption, and advanced analytics use cases.

You must provide all requested information, including Personal Data, to be considered for this career opportunity. Failure to provide such information may affect the processing and outcome of your application. You are responsible for ensuring that the information you submit is accurate and up to date.


