
BUILD YOUR SKILLS AND
YOUR CAREER

Join A Team That Celebrates You Daily!
Our people are not only our greatest asset but also our biggest competitive advantage. We don't call our employees "employees"; we call them associates.

Lead Data Engineer

Experience - 8+ Years

Location - Noida

Date of Posting - May 12th, 2025


Position Summary

We are looking for a Lead Data Engineer with a strong background in designing, developing, and maintaining complex data systems. The ideal candidate will have experience working with large datasets, real-time data processing, and cloud technologies. You will work closely with data scientists, analysts, and software engineers to develop end-to-end data solutions that support business intelligence, analytics, and reporting. We are also looking for advanced data lake implementation capabilities on platforms such as Microsoft Fabric and Amazon SageMaker Lakehouse.


Key Responsibilities

  • Data Pipeline Development: Design, build, and maintain scalable data pipelines that efficiently ingest, process, and store data from various sources, ensuring high data quality and accessibility (see the illustrative sketch after this list). 
  • Data Architecture & Modeling: Develop and implement data architecture strategies to support data warehousing, data lakes, and other big data initiatives. Design and optimize database structures and data models. 
  • ETL/ELT Processes: Lead the development of ETL/ELT workflows to transform raw data into actionable insights. Optimize existing workflows for performance and reliability. 
  • Collaboration with Cross-Functional Teams: Work closely with Data Scientists, Analysts, and Product Teams to understand business requirements and provide solutions that enable data-driven decision-making. 
  • Cloud Technologies: Implement and manage data solutions in cloud platforms (AWS, Azure, GCP), including leveraging services like S3, Redshift, BigQuery, and others for scalable data storage and processing. 
  • Data Quality & Governance: Ensure that data pipelines are compliant with data governance and quality standards. Implement monitoring and alerting systems to track data accuracy and availability. 
  • Optimization & Performance Tuning: Regularly optimize database performance, queries, and data pipelines for speed and cost-effectiveness. Troubleshoot performance issues and suggest improvements. 
  • Documentation & Reporting: Document data engineering processes, tools, and methodologies to maintain a clear and organized framework for team collaboration and knowledge sharing. 
  • Mentorship & Leadership: Mentor and provide technical guidance to junior engineers and other team members. Lead code reviews and help shape best practices for the engineering team.
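To give a concrete flavor of the pipeline and ETL/ELT responsibilities above, here is a minimal, illustrative PySpark batch job (ingest, cleanse, load). The S3 paths, column names, and quality rules are hypothetical placeholders for illustration, not a prescribed stack.

    # Illustrative sketch only: a minimal PySpark batch ETL job.
    # All paths, columns, and quality rules are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

    # Ingest: read raw JSON events landed in a hypothetical S3 raw zone.
    raw = spark.read.json("s3a://example-raw-zone/orders/2025-05-12/")

    # Transform: deduplicate, apply a simple quality rule, derive a date column.
    clean = (
        raw.dropDuplicates(["order_id"])
           .filter(F.col("order_id").isNotNull() & (F.col("amount") > 0))
           .withColumn("order_date", F.to_date("created_at"))
    )

    # Load: write partitioned Parquet to a curated zone for downstream consumers.
    (clean.write.mode("overwrite")
          .partitionBy("order_date")
          .parquet("s3a://example-curated-zone/orders/"))

    spark.stop()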

Skills and Qualifications

  • Experience: 8+ years of experience in data engineering, data pipeline development, and data management in large-scale environments. 
  • Programming Languages: Expertise in Python, SQL, and familiarity with other programming languages like Java, Scala, or Go. 
  • Data Technologies: Strong experience with big data technologies such as Hadoop, Spark, Kafka, and related frameworks (see the streaming sketch after this list). 
  • Cloud Platforms: Proficient in cloud data services such as AWS, Google Cloud, or Azure (preferably AWS). 
  • Database Management: Solid experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra). 
  • Data Warehousing & BI: Knowledge of data warehousing concepts and tools like Redshift, Snowflake, or BigQuery, and experience working with BI tools like Tableau, Looker, or Power BI. 
  • Data Modeling & Architecture: Strong understanding of data modeling, schema design, and architectural best practices for scalable data solutions. 
  • Version Control & CI/CD: Proficiency in version control tools (Git) and experience with CI/CD pipelines to ensure efficient and reliable deployments. 
  • Problem-Solving: Excellent troubleshooting skills and the ability to solve complex data engineering challenges. 
  • Communication: Strong verbal and written communication skills to interact with both technical and non-technical stakeholders. 
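Several of the qualifications above meet in real-time processing. The sketch below shows one possible shape of that work: consuming a Kafka topic with Spark Structured Streaming. The broker address, topic, schema, and storage paths are hypothetical, and running it requires the spark-sql-kafka connector package on the classpath.

    # Illustrative sketch only: parsing a Kafka topic with Spark Structured Streaming.
    # Broker, topic, schema, and storage paths are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("clicks_stream").getOrCreate()

    event_schema = StructType([
        StructField("user_id", StringType()),
        StructField("page", StringType()),
        StructField("duration", DoubleType()),
    ])

    # Kafka delivers each record's payload as bytes in the "value" column.
    events = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")
             .option("subscribe", "click-events")
             .load()
             .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
             .select("e.*")
    )

    # Append parsed events to curated storage; checkpointing enables recovery.
    query = (
        events.writeStream.format("parquet")
              .option("path", "s3a://example-curated-zone/clicks/")
              .option("checkpointLocation", "s3a://example-checkpoints/clicks/")
              .start()
    )
    query.awaitTermination()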


Technical Expertise

  • Programming Languages: Proficiency in Python, Java, or Scala. 
  • Big Data Technologies: Hands-on experience with tools like Hadoop, Spark, and Kafka. 
  • Cloud Platforms: Proficiency with AWS (e.g., S3, Redshift, Glue), Azure (e.g., Synapse, Data Factory), or GCP (e.g., BigQuery, Dataflow). 
  • Databases: Expertise in SQL Server, MySQL, PostgreSQL. 
  • Reporting: Expertise in Power BI, SSRS. 
  • Tools: Familiarity with orchestration tools like Apache Airflow and dbt (see the orchestration sketch after this list). 
  • Technology Trends: Understanding of the latest trends in data technology, staying current with offerings such as Databricks, Microsoft Fabric, and Amazon SageMaker Lakehouse. 
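As an illustration of the orchestration tooling mentioned above, here is a minimal Apache Airflow DAG wiring an extract, transform, load sequence on a daily schedule. The task bodies are stubs, the DAG id and schedule are hypothetical, and the syntax assumes Airflow 2.4 or later.

    # Illustrative sketch only: a minimal Airflow DAG (assumes Airflow 2.4+).
    # The dag_id, schedule, and task callables are hypothetical stubs.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull raw files from source systems")

    def transform():
        print("run Spark/dbt transformations")

    def load():
        print("publish curated tables to the warehouse")

    with DAG(
        dag_id="orders_daily",
        start_date=datetime(2025, 5, 12),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)

        # Linear dependency chain: extract -> transform -> load.
        t_extract >> t_transform >> t_load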


Soft Skills

  • Strong problem-solving and analytical abilities. 
  • Excellent communication and collaboration skills. 
  • Ability to mentor junior engineers and lead data engineering initiatives. 


Send us your resume at:

careers@hexaviewtech.com