Dharmendra Mishra

Head Data Engineer at Saadia Group
Contact Information
Location
Bengaluru, Karnataka, India

Topline Score

5.0 / 5.0, based on 11 ratings

Alex Rabinovich

Dharmendra has shown himself to be a highly skilled and dedicated person. He has a wide range of knowledge across Google Cloud, Terraform, DevOps, Python, SQL, and BI. Dharmendra performs very well on his own and is at the same time an invaluable team player.

Anshul Singhal

I had the pleasure of working with Dharmendra and was constantly amazed by his technical expertise and analytical skills. Dharmendra is a rare combination of technical know-how and business acumen, making him a valuable asset in any data engineering role. He excels in data warehousing, big data technologies, and data management, and has a strong understanding of best practices. Dharmendra also has excellent communication and leadership abilities, making him a great team player. I highly recommend Dharmendra for any data engineering role.

Pawan Kumar

I worked with Dharmendra for about a year at Accenture on one of our most challenging assignments. He is a great person to work with, down to earth, and as a Data Engineer he brings multiple technical solutions to every problem. He has strong knowledge of GCP and great expertise in scripting languages such as Python and shell. Overall, he is a great asset for any organization looking for a Data Engineering Solution Architect to take on challenging work.

Credentials

  • Introduction to Generative AI
    Google
    Sep, 2023
    - Sep, 2024
  • AZ-400: Designing and Implementing Microsoft DevOps Solutions
    Microsoft
    Jun, 2022
    - Sep, 2024
  • Google Cloud Certified Professional Data Engineer
    Google
    Jun, 2021
    - Sep, 2024
  • Google Tag Manager Fundamentals
    Google
    Dec, 2019
    - Sep, 2024
  • Google Analytics
    Google
    Nov, 2019
    - Sep, 2024
  • Google Data Studio
    Google
    Nov, 2019
    - Sep, 2024
  • Looker
    Google
    Nov, 2020
    - Sep, 2024

Experience

    • Saadia Group
    • United States
    • Retail
    • 100 - 200 Employee
    • Head Data Engineer
      • Jan 2023 - Present

      1. Collaborate with stakeholders to define cloud requirements and design scalable, cost-effective GCP solutions.
      2. Lead the planning and implementation of GCP infrastructure, following best practices for security, performance, and reliability.
      3. Design and implement infrastructure as code (IaC) using tools such as Terraform to automate the provisioning of cloud resources.
      4. Develop cloud architecture blueprints, ensuring alignment with business goals and technical requirements.
      5. Implement security best practices and compliance requirements within the GCP environment.
      6. Implement data pipelines using services such as BigQuery, Dataflow, Cloud Composer, Cloud Functions, and Cloud Storage (see the sketch below).
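
      For illustration of the pipeline work in item 6, a minimal Cloud Composer (Airflow) DAG sketch follows; the bucket, dataset, and table names are hypothetical placeholders, not production values.

      from datetime import datetime

      from airflow import DAG
      from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

      # Hypothetical daily load of curated CSV files from GCS into BigQuery.
      with DAG(
          dag_id="gcs_to_bigquery_daily",
          start_date=datetime(2023, 1, 1),
          schedule_interval="@daily",
          catchup=False,
      ) as dag:
          load_sales = GCSToBigQueryOperator(
              task_id="load_sales",
              bucket="example-curated-bucket",  # hypothetical bucket name
              source_objects=["sales/*.csv"],
              destination_project_dataset_table="example_project.analytics.sales",
              source_format="CSV",
              skip_leading_rows=1,
              write_disposition="WRITE_TRUNCATE",
          )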

    • Ireland
    • Business Consulting and Services
    • 700 & Above Employee
    • Analytics Associate Manager
      • Mar 2022 - Dec 2022

      Role Played: Associate Manager (Data Engineering)
      Environment: Google Cloud Platform (App Engine, BigQuery, Cloud Storage, Pub/Sub, Cloud Composer, Cloud Dataflow), Python, Flask, Jira, Shell Script
      Project Description: The data warehouse was created by JLR to enable their data science team to work on multiple account-management use cases. Data is pulled from several diverse sources, including their own marketing entity platform data on an S3 bucket, the JLR product rules database, and user activity (Contract Event Log) from an SFTP server, and co-located in BigQuery.
      4-Step Approach for this ETL Pipeline:
      1. Pull data from the RDBMS, the S3 bucket, and the internal SFTP server and land it in a raw GCS bucket.
      2. Create a Dataflow job to process the data and place it in a curated GCS bucket (see the sketch below).
      3. Load the data into Google BigQuery.
      4. Create a Flask application for reporting and billing so account managers can make further use of the data.
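
      A minimal Apache Beam (Dataflow) sketch of the raw-to-curated step in item 2, assuming newline-delimited JSON in the raw bucket; the bucket paths and field names are hypothetical.

      import json

      import apache_beam as beam
      from apache_beam.options.pipeline_options import PipelineOptions

      def clean_record(line):
          # Parse a raw JSON line and keep only the curated fields (hypothetical schema).
          record = json.loads(line)
          return json.dumps({
              "account_id": record.get("account_id"),
              "event_type": record.get("event_type"),
              "event_ts": record.get("event_ts"),
          })

      # Runs locally with the DirectRunner; on GCP the same pipeline would be submitted with DataflowRunner options.
      with beam.Pipeline(options=PipelineOptions()) as pipeline:
          (
              pipeline
              | "ReadRaw" >> beam.io.ReadFromText("gs://example-raw-bucket/events/*.json")
              | "Clean" >> beam.Map(clean_record)
              | "WriteCurated" >> beam.io.WriteToText("gs://example-curated-bucket/events/cleaned")
          )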

    • Entertainment Providers
    • 1 - 100 Employee
    • Associate Manager
      • Nov 2021 - Mar 2022

      Role Played: Associate Manager (Azure Cloud & Python Developer)
      Environment: Azure DevOps, Azure Functions, Azure Blob Storage, Azure Databricks, Python
      3-Step Approach:
      1. Review the health of the existing Python code integrated with Azure DevOps.
      2. Fix bugs and integrate a logger (see the sketch below).
      3. Gather new requirements and update the code following standard Azure DevOps practices.
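
      A minimal Azure Functions (Python) sketch of the logger integration in step 2; the function body and greeting are hypothetical placeholders for the actual business logic.

      import logging

      import azure.functions as func

      def main(req: func.HttpRequest) -> func.HttpResponse:
          # Log the incoming request so failures can be traced in the function logs.
          logging.info("Processing request for URL: %s", req.url)
          name = req.params.get("name", "world")
          logging.info("Returning greeting for name=%s", name)
          return func.HttpResponse(f"Hello, {name}!", status_code=200)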

    • United States
    • Business Consulting and Services
    • 700 & Above Employee
    • Marketing Strategist and Analyst
      • Jun 2019 - Nov 2021

      Role Played: Data Engineer
      Environment: Google Cloud Platform (BigQuery, Cloud Storage, Compute Engine, Cloud Functions, Pub/Sub, Cloud Scheduler, Cloud Run, Dataflow, Cloud Composer, Data Studio), Python, Shell Script
      Project Description: This project was built from scratch: create an ETL pipeline that ingests data from multiple platforms into BigQuery (the data warehouse). It involves various GCP services: data is pulled from multiple sources, landed in GCS buckets as Avro, restructured into flat CSV files, and jobs written in Cloud Functions are triggered as soon as the data lands in GCS to load it into the data warehouse.
      4-Step Approach for this ETL Pipeline:
      1. Write API scripts in Python and schedule the jobs on Compute Engine.
      2. Extract data from multiple platforms and dump it into a Google Cloud Storage bucket.
      3. Use Cloud Functions to trigger a Dataflow job that cleans the data and loads it into BigQuery (see the sketch below).
      4. Visualize the data using Data Studio, Power BI, and Tableau.
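
      A minimal Cloud Functions (Python) sketch related to step 3, simplified here to load the newly landed Avro file directly into BigQuery rather than launching a Dataflow job; the project, dataset, and table names are hypothetical.

      from google.cloud import bigquery

      def gcs_to_bigquery(event, context):
          # Triggered by a google.storage.object.finalize event on the landing bucket.
          uri = f"gs://{event['bucket']}/{event['name']}"
          client = bigquery.Client()
          job_config = bigquery.LoadJobConfig(
              source_format=bigquery.SourceFormat.AVRO,
              write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
          )
          # Hypothetical destination table.
          load_job = client.load_table_from_uri(
              uri, "example_project.analytics.platform_events", job_config=job_config
          )
          load_job.result()  # Wait so that any load errors surface in the function logs.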

    • United States
    • Advertising Services
    • 100 - 200 Employee
    • Analytics Engineer
      • Sep 2018 - Jun 2019

      Role Played: Developer
      Environment: Google Cloud Platform (App Engine, BigQuery, Cloud Storage), Python, Flask, Jira, Shell Script
      Project Description: The data warehouse was created by SAGE to enable their data science team to work on multiple account-management use cases. Data is pulled from several diverse sources, including their own marketing entity platform data on an S3 bucket, the SAGE product rules database, and user activity (Contract Event Log) from an SFTP server, and co-located in BigQuery.
      4-Step Approach for this ETL Pipeline:
      1. Pull data from the RDBMS, the S3 bucket, and the internal SFTP server and land it in a raw GCS bucket.
      2. Create a Dataflow job to process the data and place it in a curated GCS bucket.
      3. Load the data into Google BigQuery.
      4. Create a Flask application for reporting and billing so account managers can make further use of the data (see the sketch below).
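
      A minimal Flask sketch of the reporting application in step 4, assuming the curated data lives in BigQuery; the route, dataset, and table names are hypothetical.

      from flask import Flask, jsonify
      from google.cloud import bigquery

      app = Flask(__name__)
      bq = bigquery.Client()

      @app.route("/billing/<account_id>")
      def billing_summary(account_id):
          # Aggregate activity for one account from a hypothetical curated table.
          query = """
              SELECT event_type, COUNT(*) AS events
              FROM `example_project.reporting.contract_event_log`
              WHERE account_id = @account_id
              GROUP BY event_type
          """
          job_config = bigquery.QueryJobConfig(
              query_parameters=[bigquery.ScalarQueryParameter("account_id", "STRING", account_id)]
          )
          rows = bq.query(query, job_config=job_config).result()
          return jsonify({row.event_type: row.events for row in rows})

      if __name__ == "__main__":
          app.run(debug=True)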

    • Argentina
    • Business Consulting and Services
    • 1 - 100 Employee
    • Business Intelligence Team Lead
      • Aug 2015 - Sep 2018

      Automate manual processes using Python and design tools that generate reports and dashboards, including charts and tables (a minimal sketch follows below). Automate Excel tasks using advanced Excel functions. Lead and mentor a team of analysts, providing global support across time zones, managing work allocation, and tracking report delivery worldwide. Understand digital media industry trends and develop in-depth analysis and vertical-based reporting solutions at the advertiser and publisher level. Write complex SQL queries. Work with different teams to gather requirements and develop reports in Excel. Provide direction to the development team and enforce standards and best practices.
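
      A minimal Python sketch of the report automation described above, assuming the source data arrives as a CSV export; the file names and column names are hypothetical.

      import pandas as pd

      # Hypothetical daily export of campaign delivery data.
      df = pd.read_csv("daily_delivery.csv")

      # Pivot impressions and clicks by advertiser and publisher.
      summary = df.pivot_table(
          index="advertiser",
          columns="publisher",
          values=["impressions", "clicks"],
          aggfunc="sum",
          fill_value=0,
      )

      # Write the raw data and the pivot into a single Excel workbook.
      with pd.ExcelWriter("delivery_report.xlsx", engine="openpyxl") as writer:
          df.to_excel(writer, sheet_name="raw_data", index=False)
          summary.to_excel(writer, sheet_name="summary")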

    • Sr Business Analyst
      • May 2014 - Oct 2015

    • Business Analyst
      • May 2013 - Apr 2014

    • Data Analyst
      • Feb 2011 - Apr 2013

Education

  • Rajiv Gandhi Prodyogiki Vishwavidyalaya
    Bachelor of Engineering (B.E.), Computer Science
    2005 - 2009
  • M.P. Board
    Associate's Degree
    2004 - 2005
  • M.P. Board
    High School
    2002 - 2003
