Sri Lekha

Data Engineer at CrowdStrike
Contact Information
Location
Denton, Texas, United States of America


Experience

    • CrowdStrike
    • United States
    • Computer and Network Security
    • 700 & Above Employees
    • Data Engineer
      • Dec. 2020 - Present

      • Designed the business requirement collection approach based on the project scope and SDLC methodology.
      • Installed, configured, and maintained data pipelines.
      • Created pipelines in ADF using Linked Services/Datasets/Pipelines to extract, transform, and load data between sources such as Azure SQL, Blob Storage, Azure SQL Data Warehouse, and a write-back tool, in both directions.
      • Extracted files from Hadoop and dropped them into S3 on a daily and hourly basis.
      • Worked with data governance and data quality teams to design various models and processes.
      • Involved in all steps of the project's reference data approach to MDM; created a data dictionary and source-to-target mappings in the MDM data model.
      • Developed automated regression scripts in Python to validate the ETL process across multiple databases, including AWS Redshift, Oracle, MongoDB, T-SQL, and SQL Server.
      • Automated data processing with Oozie to schedule data loading into the Hadoop Distributed File System.
      • Conducted ETL data integration, cleansing, and transformations using AWS Glue Spark scripts.
      • Designed and implemented large-scale distributed solutions in the AWS and GCP clouds.
      • Designed and developed Oracle PL/SQL and shell scripts for data import/export, data conversions, and data cleansing.
      • Performed data analysis and statistical analysis, and generated reports, listings, and graphs using SAS tools: SAS/GRAPH, SAS/SQL, SAS/CONNECT, and SAS/ACCESS.
      • Developed Spark applications using Scala and Spark SQL for data extraction, transformation, and aggregation from multiple file formats.
      • Used Kafka and integrated it with Spark Streaming, as sketched below.
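      As an illustration of the Kafka-to-Spark Streaming integration above, here is a minimal PySpark Structured Streaming sketch. The broker address, topic name, and event schema are hypothetical placeholders rather than details from this profile, and the job assumes the spark-sql-kafka connector package is available on the classpath.

          from pyspark.sql import SparkSession
          from pyspark.sql.functions import from_json, col
          from pyspark.sql.types import StructType, StringType, DoubleType

          spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

          # Hypothetical schema of the incoming JSON events
          schema = (StructType()
                    .add("event_id", StringType())
                    .add("value", DoubleType()))

          # Subscribe to a Kafka topic; broker and topic names are placeholders
          raw = (spark.readStream
                 .format("kafka")
                 .option("kafka.bootstrap.servers", "broker:9092")
                 .option("subscribe", "events")
                 .load())

          # Kafka delivers raw bytes: cast the value column, then parse the JSON payload
          events = (raw.selectExpr("CAST(value AS STRING) AS json")
                    .select(from_json(col("json"), schema).alias("e"))
                    .select("e.*"))

          # Write the parsed stream somewhere observable (console, for testing)
          query = events.writeStream.format("console").outputMode("append").start()
          query.awaitTermination()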

    • Australia
    • Renewable Energy Semiconductor Manufacturing
    • AWS Data Engineer
      • June 2018 - Nov. 2020

      • Extensively used Agile methodology as the organization standard to implement the data models.
      • Created several types of data visualizations using Python and Tableau.
      • Extracted large volumes of data from AWS using SQL queries to create reports.
      • Performed reverse engineering using Erwin to redefine entities, attributes, and relationships in the existing database.
      • Analyzed functional and non-functional business requirements, translated them into technical data requirements, and created or updated logical and physical data models.
      • Developed a data pipeline using Kafka to store data in HDFS.
      • Designed and developed the architecture for a data services ecosystem spanning relational, NoSQL, and big data technologies.
      • Performed regression testing for golden test cases from the state (end-to-end test cases) and automated the process using Python scripts.
      • Developed Spark jobs using Scala for faster real-time analytics and used Spark SQL for querying.
      • Used SQL Server Integration Services (SSIS) to extract, transform, and load data from multiple sources into the target system.
      • Primarily responsible for converting a manual reporting system into a fully automated, Docker-based CI/CD data pipeline that ingests data from different marketing platforms into an AWS S3 data lake (see the sketch after this list).
      • Utilized AWS services with a focus on big data analytics, enterprise data warehousing, and business intelligence solutions to ensure optimal architecture, scalability, and flexibility.
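      As a sketch of the S3 data-lake ingestion step described above, the following Python snippet uploads a marketing-platform export into a date-partitioned S3 prefix with boto3. The bucket name, prefix layout, and file paths are hypothetical assumptions made for the example.

          import boto3
          from datetime import datetime, timezone

          # Hypothetical bucket backing the marketing data lake
          BUCKET = "marketing-data-lake"

          def ingest_export(local_path: str, platform: str) -> str:
              """Upload one platform export into a raw/<platform>/<date>/ prefix."""
              today = datetime.now(timezone.utc).strftime("%Y/%m/%d")
              filename = local_path.rsplit("/", 1)[-1]
              key = f"raw/{platform}/{today}/{filename}"
              boto3.client("s3").upload_file(local_path, BUCKET, key)
              return f"s3://{BUCKET}/{key}"

          # Example: a CI job would call this once per nightly export
          print(ingest_export("exports/campaigns.csv", "facebook_ads"))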

    • United States
    • Software Development
    • 700 & Above Employees
    • Data Engineer
      • Dec. 2015 - May 2018

      • Gathered business requirements, defined and designed the data sourcing, and worked with the data warehouse architect on the development of logical data models.
      • Created sophisticated visualizations, calculated columns, and custom expressions, and developed map charts, cross tables, bar charts, tree maps, and complex reports involving property controls and custom expressions.
      • Investigated market sizing, competitive analysis, and positioning for product feasibility; worked on business forecasting, segmentation analysis, and data mining.
      • Worked on automated diagnosis of blood loss during emergencies and developed a machine learning algorithm to diagnose blood loss.
      • Extensively used Agile methodology as the organization standard to implement the data models; used a microservice architecture with Spring Boot-based services interacting through a combination of REST and Apache Kafka message brokers.
      • Created several types of data visualizations using Python and Tableau; extracted large volumes of data from AWS using SQL queries to create reports.
      • Performed reverse engineering using Erwin to redefine entities, attributes, and relationships in the existing database.
      • Analyzed functional and non-functional business requirements, translated them into technical data requirements, and created or updated logical and physical data models; developed a data pipeline using Kafka to store data in HDFS.
      • Performed regression testing for golden test cases from the state (end-to-end test cases) and automated the process using Python scripts.
      • Developed Spark jobs using Scala for faster real-time analytics and used Spark SQL for querying, as sketched below.
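      As an illustration of the Spark SQL querying pattern above, here is a minimal PySpark batch sketch; the input path and column names are assumptions made for the example, not details from this profile.

          from pyspark.sql import SparkSession

          spark = SparkSession.builder.appName("spark-sql-sketch").getOrCreate()

          # Load an extract (the Parquet path is a placeholder)
          df = spark.read.parquet("/data/events.parquet")

          # Expose the DataFrame to Spark SQL as a temporary view
          df.createOrReplaceTempView("events")

          # Aggregate with plain SQL; event_date is a hypothetical column
          daily = spark.sql("""
              SELECT event_date, COUNT(*) AS n_events
              FROM events
              GROUP BY event_date
              ORDER BY event_date
          """)
          daily.show()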

    • Data Analyst
      • June 2013 - Sept. 2015

      • Participated in testing of procedures and data, using PL/SQL to ensure the integrity and quality of data in the data warehouse.
      • Gathered data from the help desk ticketing system and wrote ad-hoc reports, charts, and graphs for analysis.
      • Worked to ensure high levels of data consistency between diverse source systems, including flat files, XML, and SQL databases.
      • Developed and ran ad-hoc data queries against multiple database types to identify systems of record, data inconsistencies, and data quality issues (see the sketch after this list).
      • Developed complex SQL statements to extract data, and packaged/encrypted data for delivery to customers.
      • Provided business intelligence analysis to decision-makers using an interactive OLAP tool.
      • Created T-SQL statements (SELECT, INSERT, UPDATE, DELETE) and stored procedures.
      • Defined data requirements and elements used in XML transactions.
      • Created Informatica mappings using various transformations such as Joiner, Aggregator, Expression, Filter, and Update Strategy.
      • Performed Tableau administration using Tableau admin commands.
      • Involved in defining source-to-target data mappings, business rules, and data definitions.
      • Ensured compliance of the extracts with the Data Quality Center initiatives.
      • Produced metrics reporting, data mining, and trend analysis for the help desk environment using Access.
      • Worked on SQL Server Integration Services (SSIS) to integrate and analyze data from multiple heterogeneous information sources.
      • Built reports and report models using SSRS to enable end-user use of Report Builder.
      • Created Excel charts and pivot tables for ad-hoc data pulls.
      • Created columnstore indexes on dimension and fact tables in the OLTP database to enhance read operations.
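      As a sketch of the ad-hoc cross-database consistency checks mentioned above, the following Python snippet compares row counts between a source system and the warehouse using pyodbc. The connection strings and table names are hypothetical placeholders.

          import pyodbc

          # Placeholder connection strings for the source and target databases
          SRC = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=src;DATABASE=stage;Trusted_Connection=yes"
          TGT = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=tgt;DATABASE=dw;Trusted_Connection=yes"

          def row_count(conn_str: str, table: str) -> int:
              """Return the row count of a table (table names come from a trusted list)."""
              with pyodbc.connect(conn_str) as conn:
                  cur = conn.cursor()
                  cur.execute(f"SELECT COUNT(*) FROM {table}")
                  return cur.fetchone()[0]

          # Flag tables whose counts drifted between the source and the warehouse
          for table in ["dbo.Customers", "dbo.Orders"]:  # hypothetical tables
              src_n, tgt_n = row_count(SRC, table), row_count(TGT, table)
              status = "OK" if src_n == tgt_n else "MISMATCH"
              print(f"{table}: source={src_n} target={tgt_n} {status}")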
