About the Role
We are looking for a Data Engineer to help build and maintain modern data platforms and migrate data from legacy systems to Databricks. If you enjoy working with large-scale data, building reliable pipelines, and solving complex data problems, we would love to hear from you.
Responsibilities:
- Build and maintain data pipelines and workflows in Databricks
- Write and optimize SQL queries
- Develop ETL/ELT processes using Python and PySpark
- Perform data migration from legacy systems (Oracle, SQL Server, etc.) to Databricks
- Design and implement end-to-end data pipelines
- Analyze and troubleshoot data issues, including pipeline performance and data quality
- Collaborate with data analysts and engineering teams
Requirements:
- Strong SQL skills (core requirement)
- Experience with Python and PySpark
- Hands-on experience with Databricks
- Experience building data pipelines and workflows
- Experience in data migration projects (Oracle, SQL Server, or other legacy systems)
- Strong problem analysis and troubleshooting skills
- Ability to work under pressure and meet deadlines
- Experience in Healthcare or Insurance domains is a plus
We Offer:
- Competitive salary: 600–800 USD (per project)
- Remote or hybrid work options, depending on your location
- Opportunity to work on modern data platforms and large-scale data projects
- Professional growth and participation in challenging projects
- Friendly engineering team and supportive environment