Responsibilities
Design, develop, and optimize robust data pipelines using PySpark and Python to process large datasets
Collaborate with cross-functional teams to understand complex business requirements and transform them into scalable technical solutions
Utilize Palantir Foundry to build and manage analytics applications that enable strategic and operational insights
Manage data integration workflows across distributed computing systems, building high-quality ETL/ELT processes
Develop advanced SQL for querying, transforming, and warehousing data
Work within Agile methodologies and participate in Scrum ceremonies to keep your work aligned with broader project goals
Document technical solutions and workflows to ensure knowledge sharing and long-term maintainability
Troubleshoot and resolve data processing or platform issues in a fast-paced, production-grade environment
Stay up-to-date with the latest advancements in cloud technologies, Big Data processing, and Machine Learning
Participate in code reviews and promote best practices in software engineering
Requirements
Bachelor’s degree in Computer Science, Data Science, or a related technical field
5+ years of experience in Data Integration, with a focus on large-scale distributed computing or analytics systems
Palantir Foundry expertise – prior hands-on experience is essential for this role
Strong proficiency in Python and PySpark for building scalable data workflows
In-depth knowledge of SQL (preferably Spark SQL) for efficient data querying and warehousing
Experience designing and implementing ETL/ELT processes for large datasets
Solid understanding of Scrum and Agile development principles
Strong analytical and problem-solving skills, with a strategic mindset for tackling complex challenges
Highly self-driven and capable of managing workload independently while delivering on commitments
A collaborative mindset, paired with clear and effective communication skills, including experience working in global, multicultural settings
Eagerness to learn and stay current with emerging technologies and best practices in data engineering and analytics
Nice to have
Familiarity with the insurance or financial industries, or with finance-related data workflows
Knowledge of front-end technologies such as HTML, CSS, and JavaScript, as well as build tools like Gradle
Experience with Microsoft Power BI for building data dashboards and reports
Hands-on experience with Machine Learning or with implementing Generative AI models
Understanding of statistical models and their applications in data pipelines
Exposure to Azure, AWS, or GCP cloud platforms
We offer/Benefits
We invest in your growth:
Discounts at local language schools, including offline courses for the Uzbek language