Gigalogyinc

Senior Data Engineer – Data Pipeline Development & Operational Support

Role

Senior Data Engineer – Data Pipeline Development & Operational Support

Location

Dhaka, Bangladesh

Job type

Full-time

Salary

100,000 to 180,000 BDT per month

Job description

Gigalogy Ltd. is seeking a highly skilled Senior Data Engineer to support the Operations & Maintenance (O&M) of modern data platforms.

This role focuses on data pipeline development in test environments, technical investigation, log analysis, and operational support activities. The position requires close collaboration with client project teams to deliver reliable and high-quality data solutions.

The ideal candidate will bring strong technical expertise, attention to detail, and the ability to work effectively in a structured and client-focused environment.
Key Responsibilities:

Data Pipeline Development

  • Design, build, and maintain ETL/ELT pipelines using non-production datasets
  • Implement data transformations and validation logic based on project requirements
  • Develop reusable components (scripts, workflows, notebooks) for deployment

Log Analysis & Technical Investigation

  • Analyze logs to identify failures, anomalies, and performance issues
  • Conduct root cause analysis and prepare clear technical reports
  • Collaborate with client teams to support issue resolution

Operational Support (Backend / Technical Tasks)

  • Conduct sanity checks on test runs, schema changes, and data onboarding processes
  • Support regression testing and validation before production release
  • Maintain operational runbooks, technical documentation, and change logs

Client Collaboration

  • Work closely with the client's engineering team to clarify specifications and share progress.
  • Provide technical deliverables (code, findings, documents) promptly and accurately.
  • Participate in regular sync meetings to align on tasks and priorities.

Required Qualifications:

  • 5+ years of experience in data engineering or ETL/ELT development
  • Strong proficiency in SQL and Python
  • Hands-on experience with at least one cloud platform (AWS, Azure, or GCP)
  • Experience with data pipelines and workflow orchestration tools
  • Strong analytical and troubleshooting skills
  • Good command of English (written and verbal)
  • High level of accuracy, reliability, and process adherence

Preferred Qualifications:

  • Experience with Databricks, Apache Spark, or Delta Lake
  • Familiarity with CI/CD pipelines and DevOps practices
  • Experience in data onboarding or integration projects
  • Experience working with client-facing or cross-functional teams

Ideal Candidate Profile:

  • Detail-oriented with strong ownership and accountability
  • Proactive in identifying issues and clarifying requirements
  • Quick learner with adaptability to new tools and technologies
  • Comfortable working in a structured, delivery-focused environment
  • Strong documentation and communication skills

What We Offer:

  • An opportunity to work with a Tokyo-based startup and contribute to an innovative new AI-based service
  • Work with talented colleagues in a cooperative, people-focused environment, where your contributions will be recognized
  • The salary range is 100,000 to 180,000 BDT per month, based on experience
  • Salary review twice a year
  • Performance bonus twice a year
  • Complimentary meals and snacks
  • Comprehensive health insurance coverage

Working days: Sunday to Thursday. 5 days/week onsite.
Working hours: 9:00 am - 6:00 pm (BDST).
Location: 3rd & 4th Floor, House 1148, Road 9/A, Avenue 10, Mirpur DOHS, Dhaka-1216, Bangladesh.
