Senior Data Engineering Lead

Location: Bengaluru / Bangalore (Karnataka)

Job Overview

Experience: 7.0 - 9.0 Years

Salary: As Per Industry

Gender: Both Male & Female

Job ID: 62599

Posted On: Oct 06, 2025

Valid Upto: Nov 05, 2025

Job Type: Permanent Job

PWD: 0

Job Description

Job Summary:

Location: Bangalore / Hyderabad / Noida (Hybrid, 3 days in office)

Experience: 7 - 9 Years

Salary Range: INR 24,00,000 - 30,00,000

Domain / Industry: Information Technology - Data Engineering & Analytics Solutions

Notice Period: Immediate to 30 days preferred

Role Overview

We are seeking an experienced Senior Data Engineer who can design, develop, and optimize data pipelines, data lakes, and large-scale distributed systems. The role requires strong expertise in cloud platforms, ETL tools, data warehousing, and performance tuning. The ideal candidate has hands-on experience with Python, PySpark, SQL, and AWS, along with exposure to big data platforms such as Databricks, Snowflake, BigQuery, and Delta Lake.

Key Responsibilities

  • Design & Develop Pipelines: Build scalable data pipelines for ingestion, transformation, and storage from multiple sources (databases, APIs, streaming).
  • Data Warehousing: Work with Snowflake, BigQuery, Delta Lake, and other lakehouse architectures to ensure performance, scalability, and cost optimization.
  • Cloud & ETL Tools: Leverage ETL/ELT frameworks such as Apache Airflow, Informatica, Talend, AWS Glue, Dataproc, or Azure ADF.
  • Programming & Scripting: Write clean, efficient code in Python, PySpark, and SQL for data manipulation and transformation.
  • Performance Tuning: Optimize pipelines for speed, cost efficiency, and minimal resource consumption.
  • DevOps & Infrastructure: Collaborate with DevOps teams for CI/CD integration, IaC (Terraform/CloudFormation), monitoring, and infrastructure provisioning.
  • Compliance & Security: Ensure pipelines meet data governance, audit, and security standards.
  • Collaboration: Work with architects, analysts, and business stakeholders to translate requirements into scalable solutions.
  • Mentorship: Guide and mentor junior engineers, ensuring adherence to best practices, coding standards, and relevant certifications.

Must-Have Skills

  • Strong expertise in Data Lakes, AWS, Python, PySpark, SQL.
  • Solid experience in distributed data processing frameworks.
  • Proficiency in ETL tools (Airflow, Glue, Informatica, Dataproc, Talend, Azure ADF).
  • Hands-on experience in cloud data services (AWS, Azure, or GCP).
  • Good understanding of data modeling, schemas, and data warehouse optimization techniques.
  • Knowledge of data governance and compliance frameworks.

Nice-to-Have Skills

  • Experience with LLMs, SageMaker, RAG workflows, or advanced AI-driven data solutions.
  • Knowledge of NoSQL databases (e.g., DocumentDB) and columnar formats (Parquet, Iceberg).
  • Experience in data governance automation, retention, and audit frameworks.
  • Exposure to event-driven distributed systems and serverless architectures.

Key Skills

Data Lake | AWS | Python | PySpark | SQL | ETL Tools | Cloud Data Services | Data Warehousing

Interview Process

  • 2 Technical Rounds
  • 1 Client Discussion

Skills: ETL, ETL Tools, SQL, Data, AWS, Pipelines
Company Info

Company: Monster.com (India) Private Limited

Type: IT-ITeS 

Contact Person: Foundit

Email: ixxx@foundit.in

Phone: 80xxxxx11

Website: https://www.foundit.in/

Address: Wing B, 6th Floor, Smartworks, Aurobindo Galaxy, Plot No 01, Sy. No 83/1, TSIIC, HITECH City, Raidurg, Hyderabad, Telangana, 500081