Krishna Chikkam

AWS Data Engineer at Sas Info
Contact Information
us****@****om
(386) 825-5501
Location
Chicago, Illinois, United States
Languages
  • English Full professional proficiency
  • Telugu Native or bilingual proficiency
  • Hindi Full professional proficiency
  • Spanish Limited working proficiency

Credentials

  • Oracle Database SQL Certified Associate, Oracle (Jun 2019 - Nov 2024)
  • Amazon Web Services (AWS), Capgemini (May 2019 - Nov 2024)
  • Big Data, Capgemini (May 2019 - Nov 2024)
  • Ab Initio Certified, Capgemini (Apr 2019 - Nov 2024)
  • Data Modeling Certified, Capgemini (Apr 2019 - Nov 2024)
  • Qlik Sense Certified, Capgemini (Apr 2019 - Nov 2024)
  • Qlik View Certified, Capgemini (Apr 2019 - Nov 2024)
  • Teradata Basics Certified, Capgemini (Apr 2019 - Nov 2024)
  • ETL Basics Certified, Capgemini (Mar 2019 - Nov 2024)
  • Java Programming Basics, Capgemini (Mar 2019 - Nov 2024)
  • Oracle Advanced Certified, Capgemini (Mar 2019 - Nov 2024)
  • UNIX Certified, Capgemini (Mar 2019 - Nov 2024)
  • Introduction to Python Certified, Capgemini (Feb 2019 - Nov 2024)

Experience

    • Brazil
    • Retail
    • 1 - 100 Employees
    • AWS Data Engineer
      • Jan 2023 - Present

      • Contributed to the development of a key data pipeline that processes over 500 TB of data, consolidating diverse sources into a single destination for faster analysis and better decision making (an illustrative sketch follows this list).
      • Processed transactional data from 9 primary data sources using Apache Spark, Redshift, S3, and Python.
      • Collaborated with cross-functional teams to migrate on-premises databases to AWS, reducing maintenance costs by 20%.
      • Automated three CI/CD pipelines using Git hooks, ensuring smooth code integration and version control.
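
      The consolidation pattern described above might look roughly like the following PySpark sketch. The source paths, column names, and output location are hypothetical placeholders, not details taken from the role.

        from pyspark.sql import SparkSession

        # Minimal sketch: read several heterogeneous sources, align them to a
        # common schema, and land the result in S3 as partitioned Parquet.
        # All bucket paths and column names below are placeholders.
        spark = SparkSession.builder.appName("consolidate-transactions").getOrCreate()

        orders = spark.read.json("s3://example-raw/orders/")            # hypothetical source
        payments = spark.read.parquet("s3://example-raw/payments/")     # hypothetical source
        refunds = (spark.read
                   .option("header", "true")
                   .csv("s3://example-raw/refunds/"))                   # hypothetical source

        common_cols = ["transaction_id", "customer_id", "amount", "event_ts"]
        unified = (orders.select(*common_cols)
                   .unionByName(payments.select(*common_cols))
                   .unionByName(refunds.select(*common_cols)))

        # Partition by event date so downstream queries can prune large scans.
        (unified
         .withColumn("event_date", unified["event_ts"].cast("date"))
         .write
         .mode("overwrite")
         .partitionBy("event_date")
         .parquet("s3://example-curated/transactions/"))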

    • Japan
    • Financial Services
    • 700 & Above Employees
    • Data Engineer
      • Feb 2019 - Nov 2021

      • Developed a data pipeline using AWS Glue, Python, and Apache Spark to automate data ingestion, transformation, and loading of over 10 TB of data per month, reducing manual effort and operating costs.
      • Engineered an approach to manage the loan moratorium for 8 million customers during COVID-19 using Python and SQL.
      • Architected, deployed, and supported a highly scalable data warehousing solution on Snowflake, serving over 500 concurrent users and delivering a significant improvement in application performance.
      • Implemented partitioning, bucketing, and index optimization in Hive (Hadoop), reducing query response time by 25%.
      • Streamlined data workflows by adding automation and scheduling in Apache Airflow, reducing manual intervention and increasing data accessibility (an illustrative DAG sketch follows this list).
      • Automated tasks with Bash shell scripting, processed and analyzed JSON data, and optimized storage by converting 50 TB of data to Parquet format.
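
      As one illustration of the Airflow scheduling mentioned above, a minimal daily DAG that triggers a Glue job might look like the sketch below. The DAG id, Glue job name, and region are hypothetical, and the Glue job is started through boto3 rather than any project-specific operator.

        from datetime import datetime

        import boto3
        from airflow import DAG
        from airflow.operators.python import PythonOperator


        def start_glue_job():
            """Kick off a (hypothetical) Glue ETL job, e.g. a JSON-to-Parquet conversion."""
            glue = boto3.client("glue", region_name="us-east-1")
            run = glue.start_job_run(JobName="example-json-to-parquet")
            print("Started Glue job run:", run["JobRunId"])


        with DAG(
            dag_id="example_daily_ingestion",
            start_date=datetime(2021, 1, 1),
            schedule_interval="@daily",
            catchup=False,
        ) as dag:
            # Single task for the sketch; a real pipeline would chain ingestion,
            # validation, and load steps.
            PythonOperator(
                task_id="start_glue_job",
                python_callable=start_glue_job,
            )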

    • ETL/Informatica Developer
      • Jul 2018 - Feb 2019

      • Led the development of Informatica ETL applications in an agile environment, cutting development costs by 25%.
      • Developed and delivered Tableau reports and dashboards for business users, increasing their productivity.
      • Spearheaded performance tuning of an existing ETL process by introducing parallel processing and optimizing the data loading strategy, improving application performance by at least 30% (a generic sketch of the idea follows this list).
      • Orchestrated the migration of applications from Informatica PowerCenter to Informatica Intelligent Cloud Services (IICS).
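
      The parallel-loading work above was configured within Informatica itself (session partitioning), but the underlying idea can be sketched generically in Python; the batch file names and the loader function below are placeholders.

        from concurrent.futures import ThreadPoolExecutor, as_completed

        # Hypothetical list of extract files to load into a target table.
        BATCHES = ["batch_001.csv", "batch_002.csv", "batch_003.csv", "batch_004.csv"]


        def load_batch(path: str) -> int:
            """Placeholder for a real loader (e.g. a bulk insert); returns rows loaded."""
            print(f"loading {path}")
            return 0


        # Load batches concurrently instead of one after another, which is the same
        # idea as adding partitions to an ETL session.
        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = {pool.submit(load_batch, p): p for p in BATCHES}
            for fut in as_completed(futures):
                print(futures[fut], "->", fut.result(), "rows")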

    • Informatica Developer
      • Jan 2018 - May 2018

      • Improved database performance by 30% through strategic query optimization, partitioning, and indexing.
      • Extracted and transformed data from multiple databases (PostgreSQL, MS SQL Server, Teradata, Oracle), SharePoint, flat files, and XML files into target database tables and CSV files (a small extraction sketch follows this list).
      • Executed the Disaster Recovery (DR) plan for over 10 applications, ensuring business continuity with minimal downtime.
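
      A generic version of the multi-source extraction above, sketched with pandas and SQLAlchemy; the connection strings, query, and output file are placeholders, and the production work was implemented in Informatica mappings rather than this code.

        import pandas as pd
        from sqlalchemy import create_engine

        # Hypothetical connection strings for two of the source systems.
        SOURCES = {
            "postgres": "postgresql+psycopg2://user:pass@pg-host/sales",
            "mssql": "mssql+pyodbc://user:pass@mssql-host/sales?driver=ODBC+Driver+17",
        }

        frames = []
        for name, url in SOURCES.items():
            engine = create_engine(url)
            # Placeholder query; each real source had its own extraction logic.
            df = pd.read_sql("SELECT customer_id, amount, created_at FROM orders", engine)
            df["source_system"] = name
            frames.append(df)

        # Consolidate and write a CSV target, mirroring the flat-file outputs above.
        pd.concat(frames, ignore_index=True).to_csv("orders_consolidated.csv", index=False)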

Education

  • Governors State University
    Master of Science - MS, Computer Science
    2022 - 2023
  • MLR Institute of Technology
    Bachelor of Technology, Computer Science
    2014 - 2018
