About
Results-driven SDE-II with over 4.5 years of hands-on experience architecting and implementing high-performance ETL pipelines and real-time data frameworks. Proven ability to improve business decision-making by 30% by ensuring data accuracy, integrity, and reliability with Kafka, Spark, and Python. Adept at leading complex project deliveries, migrating data architectures, and optimizing marketing spend by 14%.
Work
Quince
SDE-II (Data Engineering)
Bengaluru, Karnataka, India
Summary
Leading end-to-end data modeling and pipeline development for E-commerce operations, enhancing data fetching efficiency and strategic decision-making.
Highlights
Led an end-to-end E-commerce data modeling project, foundational to Supply Chain operations, encompassing critical service data (orders, tax, payments, offers, payment gateway systems).
Developed an in-house feature to extract, process, and load DynamoDB data into Iceberg tables, reducing data fetching costs by 99% through a plug-and-play solution.
Engineered a generic, config-driven codebase for Spark-based ETL batch processing, enabling seamless data migration from Snowflake to Iceberg via configuration entries.
Identified data-driven logic for user interaction analysis, optimizing marketing spend on Meta platforms by 14%.
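The config-driven ETL approach above can be sketched as a dispatch table keyed by source/target pair, so a new migration is declared as a config entry rather than new code. This is a minimal stdlib-only illustration; all names (`register`, `run_job`, the route names) are hypothetical, and the real implementation used Spark for the actual reads and writes.

```python
# Hypothetical sketch of a config-driven ETL dispatcher (all names invented).
# Each pipeline is declared as a config entry; adding a migration means
# adding a dict, not writing new pipeline code.
from typing import Callable, Dict, Tuple

# Registry mapping (source, target) routes to transfer functions.
TRANSFERS: Dict[Tuple[str, str], Callable[[dict], str]] = {}

def register(source: str, target: str):
    """Decorator that registers a transfer function for a route."""
    def wrap(fn):
        TRANSFERS[(source, target)] = fn
        return fn
    return wrap

@register("snowflake", "iceberg")
def snowflake_to_iceberg(cfg: dict) -> str:
    # Real code would read via Spark and write an Iceberg table;
    # here we only report what would be done.
    return f"copied {cfg['table']} from snowflake to iceberg"

def run_job(cfg: dict) -> str:
    """Look up and run the transfer declared by a config entry."""
    fn = TRANSFERS[(cfg["source"], cfg["target"])]
    return fn(cfg)

job = {"source": "snowflake", "target": "iceberg", "table": "orders"}
print(run_job(job))
```

The design choice is that engineers touch only configuration for routine migrations, which is what makes the "seamless data migration via configuration entries" claim concrete.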
CoinDCX
Senior Data Engineer
Bengaluru, Karnataka, India
Summary
Directed critical data engineering pipeline migrations from monolith to microservices, ensuring seamless data integration and rapid insight delivery.
Highlights
Led a high-impact migration project, successfully transitioning data engineering pipelines from a monolith to a microservices architecture.
Spearheaded data integration across legacy monolith and modern microservices systems, ensuring seamless interoperability and data flow.
Implemented robust data pipelines delivering business insights from multiple sources to stakeholders within 30 minutes.
Prioritized and enforced data quality through rigorous validation and cleansing procedures, maintaining data integrity and reliability during migration.
Orchestrated a smooth architectural transition, minimizing business disruptions and ensuring uninterrupted access to critical data.
CoinDCX
Data Engineer
Bengaluru, Karnataka, India
Summary
Designed and optimized real-time and batch data pipelines, leveraging Kafka, Spark, and Delta Lake to enhance data delivery, efficiency, and quality.
Highlights
Designed and maintained performance-tuned real-time data pipelines using Kafka, Spark Streaming, and Delta Lake, delivering data in ~45 seconds and reducing computing resources by 25%.
Architected versatile data processing frameworks with Apache Spark, enabling pipeline creation with 90% less effort.
Applied advanced data modeling to optimize reporting and analytics, reducing compute costs by 40% and runtime by 30%.
Engineered robust data cleansing pipelines for unstructured data using advanced Spark functionality, ensuring scalability and fault tolerance.
Led a decentralized data project, managing end-to-end requirements gathering, planning, and execution.
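The real-time flow above (Kafka source, Spark Streaming aggregation, Delta Lake sink) can be illustrated with a micro-batch loop: drain a bounded batch of events from a queue and fold them into per-key aggregates. This is a stdlib-only stand-in under stated assumptions; `micro_batch` and the event shape are hypothetical, and the actual pipelines used Kafka and Spark Structured Streaming rather than an in-process queue.

```python
# Hypothetical stdlib stand-in for a Kafka -> Spark Streaming micro-batch:
# events arrive on a queue and are aggregated per user for one batch.
import queue
from collections import defaultdict

def micro_batch(events: "queue.Queue", batch_size: int = 3) -> dict:
    """Drain up to batch_size events and sum amounts per user."""
    totals = defaultdict(float)
    for _ in range(batch_size):
        try:
            ev = events.get_nowait()  # non-blocking, like a bounded poll
        except queue.Empty:
            break  # batch ends early if the source is drained
        totals[ev["user"]] += ev["amount"]
    return dict(totals)

q = queue.Queue()
for ev in [{"user": "a", "amount": 10.0},
           {"user": "b", "amount": 5.0},
           {"user": "a", "amount": 2.5}]:
    q.put(ev)

print(micro_batch(q))  # per-user totals for this micro-batch
```

In the production version, the batch boundary and state are managed by the streaming engine; the sketch only shows the aggregation step that each micro-batch performs.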
Tata Consultancy Services
Assistant System Engineer
Bengaluru, Karnataka, India
Summary
Monitored, debugged, and optimized production data pipelines, ensuring system stability and enhancing code observability in distributed environments.
Highlights
Monitored production data pipelines, proactively addressing failures and conducting performance analysis to resolve bottlenecks.
Investigated and resolved real-time code defects and production issues, ensuring seamless data processing workflows.
Collaborated with cross-functional teams to troubleshoot complex issues across distributed systems.
Implemented logging, error handling, and instrumentation techniques, significantly enhancing the observability and maintainability of production codebases.
Education
Samrat Ashok Technological Institute
Bachelor of Engineering (ECE)
Languages
English
Skills
Data Technologies & Tools
Apache Kafka, Apache Spark, Spark Structured Streaming, Data Extraction, Data Modeling, Data Quality, ETL, Iceberg, Pandas, Databricks, Snowflake.
Cloud Platforms
AWS Athena, AWS Glue, AWS S3, AWS DMS.
Programming & Databases
Python, SQL.
Methodologies & Project Management
Jira, Agile methodology, Project Planning, Requirement Gathering.
Professional Competencies
Problem-solving, Analytical Thinking, Perseverance, Teamwork, Leadership, Ownership.