Meghana MP

Data Engineer at infodat
Contact Information
us****@****om
(386) 825-5501
Location
Tempe, US

Experience

    • Germany
    • Information Technology & Services
    • 1 - 100 Employee
    • Data Engineer
      • Aug 2023 - Present

      Houston, Texas, United States
      • Extracted information from different data sources such as Oracle, SQL Server, MS Access, and flat files.
      • Created star schema dimensional models for the data mart using Visio and created dimension and fact tables based on the business requirements.
      • Worked on incremental load and incremental refresh from OLTP systems to the OLAP data source for reporting purposes; experienced in Copy activity and custom Azure Data Factory pipeline activities for on-cloud ETL handling (see the sketch after this list).
      • Created Azure Data Factory pipelines for moving and transforming data.
      • Extracted, transformed, and loaded data from source systems to Azure data storage services using a combination of Azure Data Factory and Azure Data Lake; ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed it in Azure Databricks.
      • Experienced in developing Spark applications using Spark SQL in Databricks to analyze and transform data, uncovering insights into customer usage patterns.
      • Worked on Cloudera CDP configuration, including the AMI catalog, registering the CDP environment, setting the ID Broker mapping, and creating the Data Lake and Data Hub.
      • Extensive experience understanding business requirements in a BI context and designing data models to convert raw data into meaningful insights.
      • Made extensive use of DAX queries and functions in Power BI to crunch and slice data for deeper insights that supported decision making.
      • Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data between sources such as Azure SQL, Blob Storage, Azure SQL Data Warehouse, and a write-back tool, in both directions.
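
      As an illustration of the incremental load pattern described above, the sketch below shows a minimal watermark-based load in PySpark on Databricks, pulling changed rows from a SQL Server source over JDBC and appending them to a Delta table. The connection string, table name, watermark column, and storage paths are hypothetical placeholders, not details taken from this profile.

```python
# Minimal sketch of a watermark-based incremental load (hypothetical names).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental-load").getOrCreate()

target_path = "/mnt/datalake/curated/orders"  # hypothetical ADLS mount

# Read the current high-water mark from the Delta target; fall back to a
# full load if the target does not exist yet.
try:
    watermark = (spark.read.format("delta").load(target_path)
                 .agg(F.max("modified_at")).first()[0])
except Exception:
    watermark = None

# Push the filter down to the source by wrapping it in a JDBC subquery.
src = "(SELECT * FROM dbo.Orders) q" if watermark is None else \
      f"(SELECT * FROM dbo.Orders WHERE modified_at > '{watermark}') q"

incremental = (spark.read.format("jdbc")
               .option("url", "jdbc:sqlserver://<server>;database=<db>")  # placeholder
               .option("dbtable", src)
               .load())

# Append only the new or changed rows to the Delta target.
incremental.write.format("delta").mode("append").save(target_path)
```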

    • United States
    • Higher Education
    • 700 & Above Employee
    • Data Engineer
      • Sep 2021 - May 2023

      • Analyzed, designed, and built modern data solutions using Azure PaaS services to support data visualization; understood the current production state of the application and determined the impact of new implementations on existing business processes.
      • Worked on Azure PaaS components such as Azure Data Factory, Databricks, Azure Logic Apps, Application Insights, Azure Data Lake, Azure Data Lake Analytics, virtual machines, geo-replication, and App Services.
      • Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data between sources such as Azure SQL, Blob Storage, Azure SQL Data Warehouse, and a write-back tool, in both directions.
      • Wrote Hive queries for data analysis to meet business requirements; implemented custom Kafka encoders for a custom input format to load data into Kafka partitions; streamed data in real time using Spark with Kafka for faster processing (see the sketch after this list).
      • Developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation across multiple file formats.
      • Responsible for estimating cluster size and for monitoring and troubleshooting the Databricks Spark cluster.
      • Able to apply the Spark DataFrame API to complete data manipulation within a Spark session.
      • Good understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, driver and worker nodes, stages, executors and tasks, deployment modes, the execution hierarchy, fault tolerance, and collection.
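
      As an illustration of the real-time path described above, here is a minimal sketch of Spark Structured Streaming reading JSON events from Kafka and writing them to a Delta sink with checkpointing. The broker address, topic name, payload schema, and paths are hypothetical, not details from this profile.

```python
# Minimal sketch of Spark Structured Streaming from Kafka (hypothetical names).
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

# Assumed shape of the JSON payload on the topic.
schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("ts", TimestampType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
       .option("subscribe", "events")                     # hypothetical topic
       .load())

# Kafka delivers the payload as bytes; cast to string and parse the JSON.
parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("e"))
          .select("e.*"))

# Checkpointing makes the stream fault tolerant across restarts.
(parsed.writeStream.format("delta")
 .option("checkpointLocation", "/mnt/chk/events")  # hypothetical path
 .start("/mnt/datalake/raw/events"))
```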

    • Bangladesh
    • Advertising Services
    • 1 - 100 Employee
    • Data Engineer
      • Feb 2017 - Jul 2021

      • Created workflow data architecture for operational and analytical operations as part of the migration from Hadoop, landing the data in AWS.
      • Worked on data mapping, defect checking, validation, and analysis for our tables.
      • Effectively took part in developing project designs using Visio/Draw.io for the AWS migration.
      • Worked on data migration from Hadoop to AWS, Athena, DynamoDB, and Robo 3T.
      • Owned and managed all changes to the data models; created data models, solution designs, and data architecture documentation for complex information systems.
      • Worked with project and application teams to ensure they understood and fully complied with data quality standards, architectural guidelines, and designs.
      • Enforced naming standards and a data dictionary for data models.
      • Developed data migration and data validation SQL scripts from the old system to the new system.
      • Worked on an AWS architecture implementing a completely cloud-based big data solution using S3, Lambda, and Kinesis Data Streams (see the sketch after this list).
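
      As an illustration of the S3 + Lambda + Kinesis design mentioned above, here is a minimal sketch of a Lambda handler that decodes a batch of Kinesis records and lands them in S3. The bucket name, key prefix, and record shape are hypothetical, not details from this profile.

```python
# Minimal sketch of a Kinesis-triggered Lambda landing records in S3
# (bucket and key are hypothetical placeholders).
import base64
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "example-landing-bucket"  # hypothetical

def handler(event, context):
    # Kinesis delivers record payloads base64-encoded.
    rows = [json.loads(base64.b64decode(r["kinesis"]["data"]))
            for r in event["Records"]]
    key = f"landing/{context.aws_request_id}.json"
    s3.put_object(Bucket=BUCKET, Key=key,
                  Body=json.dumps(rows).encode("utf-8"))
    return {"records": len(rows)}
```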

    • Business Consulting and Services
    • 700 & Above Employee
    • Data Analyst
      • Jun 2013 - Jan 2017

      Hyderabad, Telangana, India

Education

  • Jawaharlal Nehru Technological University
    Bachelor's degree
    2008 - 2013
