Ashish M G

Senior Data Engineer at HelloFresh
Contact Information
us****@****om
(386) 825-5501
Location
Berlin, Germany
Languages
  • Hindi
  • English
  • Tamil

Recommendations

Rated 5.0/5.0, based on 2 ratings.

Sujesh Chirackkal

I was more than lucky to have Ashish on my team. He is a hardcore technical guy who is very passionate about engineering. I worked with him on three projects, and he was my go-to person for all technical challenges. He has great hands-on skills in both data engineering and DevOps. It is fortunate to have someone like him on any team. Building frameworks and libraries is one of his key strengths.

Medona Jacob

Ashish is a data polyglot with very strong credentials in cloud and data engineering. The work he did in building a delta detection framework was exceptional and spearheaded many client initiatives. He is also resourceful, efficient, a critical thinker and a great team player. Ashish would be an invaluable asset to any data engineering team. It was a pleasure working with him.

Credentials

  • Microsoft Certified: Azure Data Fundamentals, Microsoft (May 2021 - Nov 2024)
  • Scrum Foundation Professional Certification (SFPC), CertiProf (Jul 2020 - Nov 2024)
  • EY Analytics - Data integration - Bronze, EY (Jun 2020 - Nov 2024)
  • Spark - Level 1, IBM (Apr 2020 - Nov 2024)
  • Getting Started With Apache Spark SQL, Databricks (Mar 2020 - Nov 2024)
  • The Python Bible, Udemy (Dec 2019 - Nov 2024)
  • Oracle PL/SQL Developer Certified Associate, Oracle (Feb 2017 - Nov 2024)
  • Academy Accreditation - Databricks Lakehouse Fundamentals, Databricks (Jun 2022 - Nov 2024)
  • Microsoft Certified: Azure Data Engineer Associate, Microsoft (Jun 2022 - Nov 2024)
  • AWS Certified Solutions Architect – Associate, Amazon Web Services (AWS) (Mar 2020 - Nov 2024)

Experience

    • HelloFresh
    • Germany
    • Consumer Services
    • 700 & Above Employee
    • Senior Data Engineer
      • Mar 2023 - Present

    • Senior Data Engineer
      • Dec 2022 - Feb 2023

    • EY
    • United Kingdom
    • IT Services and IT Consulting
    • 700 & Above Employee
    • Senior Data Engineer (Senior-1) - Big Data
      • Oct 2022 - Nov 2022

    • Lead Analyst (Data Engineer - Big Data)
      • Sep 2021 - Oct 2022

      Developed a transformation framework that accepts configurations as JSON and converts them into generated Spark SQL code to produce the target data. It was adopted in EY Solutions Hub as an accelerator, enabling non-Spark users to define ETL transformations with simple JSON or an API request.

      Worked for an Australian online retail client to build a big data platform that ingests from multiple source systems using Azure services (ADF, Databricks, Event Hub, and Synapse Analytics). Set up the ingestion framework using Azure ADF and Databricks (preprocessing, enrichment, masking), and set up the entire CI/CD pipeline using GitHub Actions and Terraform for the Azure services involved (ADF, Databricks, and Synapse Analytics).

      Developed a real-time data quality framework using Apache Flink, Kafka, Apache Pinot, and Superset. This was a POC to experiment with real-time time-series databases such as Apache Pinot. Built admin UIs using Flask and exposed APIs (for creating DQ rules and similar tasks) using FastAPI.
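
      The JSON-to-Spark-SQL idea can be illustrated with a minimal sketch. The config schema, field names, and table names below are hypothetical stand-ins for illustration, not the accelerator's actual format:

```python
# Minimal sketch: render a Spark SQL statement from a JSON transformation
# config, so non-Spark users only supply the JSON. All names are illustrative.
import json
from pyspark.sql import SparkSession

config_json = """
{
  "source_table": "raw.orders",
  "target_table": "curated.daily_orders",
  "select": ["order_date", "country"],
  "aggregations": [{"expr": "sum(amount)", "alias": "total_amount"}],
  "group_by": ["order_date", "country"]
}
"""

def config_to_sql(cfg: dict) -> str:
    """Generate a Spark SQL SELECT statement from a transformation config."""
    columns = cfg["select"] + [f'{a["expr"]} AS {a["alias"]}' for a in cfg["aggregations"]]
    sql = f'SELECT {", ".join(columns)} FROM {cfg["source_table"]}'
    if cfg.get("group_by"):
        sql += f' GROUP BY {", ".join(cfg["group_by"])}'
    return sql

spark = SparkSession.builder.appName("config-driven-etl").getOrCreate()
cfg = json.loads(config_json)
result = spark.sql(config_to_sql(cfg))
result.write.mode("overwrite").saveAsTable(cfg["target_table"])
```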

    • Senior Analyst (Data Engineer - Big Data)
      • Oct 2019 - Sep 2021

      Worked for an Australian insurance client to transform the way data engineering was done into a more modern, cloud-agnostic approach. Used PySpark to build a standard framework, with multiple reusable pip libraries created to improve the developer experience. Used Databricks Delta Lake to build an efficient data lake on top of Ceph storage, with capabilities like time travel, ACID transactions, and incremental changes. Used Airflow DAGs for orchestration and Kubernetes for container orchestration of the Dockerized framework. Also conducted POCs to convert the existing Greenplum-based data warehouse development to a more structured dbt-based approach: we created dbt projects for each data product area and orchestrated the entire flow with Airflow.
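
      The Delta Lake capabilities mentioned above can be sketched as follows; the storage path and table contents are hypothetical, and this assumes the delta-spark package is installed:

```python
# Sketch of Delta Lake on S3-compatible storage (Ceph exposes an S3 API):
# ACID writes, time travel, and incremental reads. Names are illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-lake-sketch")
    # Register the Delta Lake extensions (requires the delta-spark package).
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

path = "s3a://lake/policies"  # hypothetical bucket on the Ceph S3 endpoint

# ACID append: each successful write commits a new table version.
df = spark.createDataFrame([(1, "active")], ["policy_id", "status"])
df.write.format("delta").mode("append").save(path)

# Time travel: read the table as it existed at an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)

# Incremental changes: stream only commits made after the last checkpoint.
updates = spark.readStream.format("delta").load(path)
```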

    • Tata Consultancy Services (TCS)
    • India
    • IT Services and IT Consulting
    • 700 & Above Employee
    • Big Data Developer
      • Jan 2017 - Oct 2019

      ⇨ Project Description:
      ● Part of the Analytical Platform team, which calculates base facts and performs aggregations based on studies submitted by the client
      ● These facts are used to calculate useful metrics that help in understanding the sales performance of particular products based on their location on a shelf/rack in a particular store

      ⇨ Technologies used:
      ● Used Apache Hive and Apache Spark extensively for fact calculations, aggregations, and feature implementations
      ● Used HDFS as the data storage layer

      ⇨ Programming Languages:
      ● Proficient in Core Java
      ● Experienced programming Spark in both the Scala and Python APIs (PySpark)
      ● Good SQL knowledge
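
      The kind of fact calculation described above might look roughly like the sketch below; the Hive table and column names are hypothetical:

```python
# Sketch: aggregate base sales facts by product and shelf position from a
# Hive table stored on HDFS. All table and column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("fact-aggregation")
    .enableHiveSupport()  # read/write Hive tables backed by HDFS
    .getOrCreate()
)

sales = spark.table("raw.store_sales")

base_facts = (
    sales.groupBy("store_id", "product_id", "shelf_position")
    .agg(
        F.sum("units_sold").alias("total_units"),
        F.sum("revenue").alias("total_revenue"),
    )
)

# Persist the aggregated facts back to Hive for downstream metric jobs.
base_facts.write.mode("overwrite").saveAsTable("analytics.product_shelf_facts")
```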

    • System Engineer
      • Nov 2015 - Dec 2016

      Worked as a Java developer on an ERP project. Also worked in the internal TCS innovation group on website development in Ruby on Rails.

Education

  • Cochin University of Science and Technology
    B Tech, Electronics and Communication Engineering
    2011 - 2015
  • Nirmala Matha Central School
    High School, Maths; Computer Science; Physics; Chemistry; English
    2005 - 2010
