Endowus
Endowus - Senior Data Engineer - Spark/Scala
Job Location
Hyderabad, India
Job Description
Job Title: Senior Data Engineer

About the Role:
As a Senior Data Engineer, you will play a critical role in shaping the future of our data platform. You will lead the design, development, and implementation of highly scalable and reliable data systems that empower data-driven decision-making across the organization. You will work closely with cross-functional teams, including Product, Engineering, and Data Analytics, to deliver high-quality data solutions that meet our business needs. This role requires a strong technical background, excellent leadership skills, and a passion for data.

Responsibilities & Ownership:
- Technical Leadership: Lead the technical design, delivery, reliability, and security of our core data platform.
- Collaboration: Work closely with the Product team, other Engineering teams, and stakeholders in Data Analytics, Growth, Marketing, Operations, Compliance, and IT Risk to achieve our business goals.
- Quality & Efficiency: Strive for high levels of technical quality, reliability, and delivery efficiency.
- Mentorship & Growth: Mentor and grow a small, talented team of junior data engineers, fostering a culture of learning and innovation.
- Data Pipeline Development: Build and optimize data pipelines for data collection, transformation, and aggregation using Apache Flink or Apache Spark (a minimal illustrative sketch follows this description).
- Data Integration: Integrate data sources using REST and streaming protocols, especially Kafka.
- Data Quality & Governance: Build systems and processes to handle data quality, data privacy, and data sovereignty requirements.

Requirements:
- Bachelor's degree or above in Computer Science, a related field, or equivalent professional experience.
- At least 6 years of experience designing and implementing highly scalable, distributed data collection, aggregation, and analysis systems built to handle large volumes of data in the cloud.
- Significant hands-on experience developing data pipelines with Apache Spark and Scala.
- At least 2 years of experience as a tech lead working directly with business users and leading technical initiatives.
- Significant hands-on experience building and optimizing data pipelines for data collection, transformation, and aggregation in Apache Flink or Apache Spark, using dependency and workflow management tools such as Airflow, and operating in a public cloud environment such as AWS, GCP, or Azure.
- Advanced SQL knowledge and strong experience working with relational and non-relational databases.
- Experience integrating BI tools such as Tableau, Mode, or Looker.
- Experience integrating data sources using REST and streaming protocols, especially Kafka.
- Experience building systems and processes to handle data quality, data privacy, and data sovereignty requirements.
- Experience with agile processes, testing, CI/CD, and production error/metrics monitoring.
- Self-driven with a strong sense of ownership.
- Comfortable with numbers and motivated by steep learning curves.
- Strong product sense and empathy for customers' experience of using the product.

Preferred Skills & Experience:
- Domain experience in a B2C context is a strong plus.
- Knowledge of the finance, wealth, and trading domains.
- Some exposure to CQRS / Event Sourcing patterns.
- Experience with AWS or GCP, Cassandra, Kafka, Kubernetes, Terraform.

(ref:hirist.tech)
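For illustration only, the sketch below shows the kind of Spark-with-Scala batch aggregation pipeline the role describes: reading raw events, applying a basic data-quality filter, and writing daily per-account aggregates. The paths, column names, and schema (s3://example-bucket/..., account_id, event_ts, amount) are hypothetical assumptions, not details from the posting.

```scala
// Minimal illustrative sketch only. All paths, column names, and the schema
// below are hypothetical; they are not taken from the posting.
import org.apache.spark.sql.{SparkSession, functions => F}

object DailyAggregation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-aggregation")
      .getOrCreate()

    // Hypothetical source: raw transaction events landed as Parquet on S3.
    val events = spark.read.parquet("s3://example-bucket/raw/transactions/")

    val daily = events
      .filter(F.col("amount").isNotNull)                      // basic data-quality filter
      .withColumn("event_date", F.to_date(F.col("event_ts"))) // derive a partition-friendly date
      .groupBy(F.col("account_id"), F.col("event_date"))
      .agg(
        F.sum("amount").as("total_amount"),
        F.count(F.lit(1)).as("txn_count")
      )

    // Hypothetical sink, partitioned by date for downstream analytics.
    daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/daily_account_totals/")

    spark.stop()
  }
}
```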
Location: Hyderabad, IN
Posted Date: 4/28/2025
Contact Information
Contact: Human Resources, Endowus