Jagan Nalla

Senior Data Engineer at LiveAction Software
Contact Information
Location
Jacksonville, Florida, United States
Languages
  • English (Professional working proficiency)

Bio

5.0 / 5.0 (based on 2 ratings)


Jigyasa Kohli

Jagan was a senior colleague of mine during my time at Icube. He is a very hard-working person who can tackle any BI problem with ease. We worked together on several projects, and I found him to be a highly skilled and dedicated professional. His expertise in BI has helped our company immensely. I strongly recommend working with Jagan; he was an amazing mentor.

Alexander Small

Jagan is an incredible data engineer and problem solver. I would often ask him for his input or how he would navigate a specific issue, and each time he would have a great answer or opinion. I relied on Jagan in many situations, and that's only because I trusted him to deliver. He has taught me a lot and has earned my respect tenfold. I hope to work with Jagan again very soon.

Credentials

  • QlikView 11 Certified Designer
    Qlik
    Jul 2014 - Sep 2024
  • QlikView 11 Certified Developer
    Qlik
    Jul 2014 - Sep 2024

Experience

    • United States
    • Software Development
    • 100 - 200 Employees
    • Senior Data Engineer
      • Jul 2022 - Present

      • Developed CloudETL pipelines using Python. Set up the infrastructure and environment to develop use cases locally with Docker and run the pipelines locally before deploying them to the cloud.
      • Developed Apache Flink applications using the PyFlink API for real-time streaming and batch processing.
      • Used PyFlink (the Python API for Apache Flink) to build scalable batch and streaming workloads, such as real-time data processing pipelines, large-scale exploratory data analysis, and ETL processes.
      • Worked with the two APIs available in Apache Flink (see the sketch after this entry):
        1. PyFlink Table API: used to write powerful relational queries in a way similar to using SQL or working with tabular data in Python.
        2. PyFlink DataStream API: used for lower-level control over Flink's core building blocks, state and time, to build more complex stream processing use cases.
      • Deployed CloudETL and Apache Flink applications on Amazon EKS with CI/CD jobs from GitLab.
      • Worked on AWS services such as S3, RDS, Glue, and Athena.
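A minimal PyFlink Table API sketch of the kind of relational query described above; the table name, schema, connector settings, and local path are hypothetical stand-ins for illustration only.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Streaming-mode table environment (a batch-mode environment is created the same way).
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Hypothetical source table; connector, path, and schema are illustrative only.
t_env.execute_sql("""
    CREATE TABLE events (
        user_id STRING,
        amount  DOUBLE,
        ts      TIMESTAMP(3)
    ) WITH (
        'connector' = 'filesystem',
        'path' = '/data/events',
        'format' = 'csv'
    )
""")

# A relational query expressed much like SQL over tabular data.
totals = t_env.sql_query(
    "SELECT user_id, SUM(amount) AS total_amount FROM events GROUP BY user_id"
)
totals.execute().print()
```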

    • United States
    • Software Development
    • 1 - 100 Employees
    • Senior Data Engineer | Developer
      • Jan 2016 - Jul 2022

      • Worked on the Microsoft Azure cloud platform (Azure Databricks, Azure Storage Account, Azure Data Factory, Azure Synapse, and Azure SQL).
      • Worked on the AWS cloud platform (S3, Redshift, Glue, and Athena).
      • Good experience with Hortonworks and Cloudera Hadoop distributions, developing massive and robust data pipelines. Worked on data ingestion, data transformation, and data attribution to build a hybrid data lake that answers valuable business questions.
      • Experience designing and implementing complete end-to-end Hadoop solutions using Hive, Sqoop, Oozie, Spark, HBase, Kudu, and ZooKeeper.
      • Working experience with Hadoop architecture and components such as HDFS, JobTracker, TaskTracker, NameNode, DataNode, and MapReduce concepts.
      • Experience analyzing data using HiveQL, Impala/Kudu SQL, and custom Spark SQL (see the sketch after this entry).
      • Experience importing and exporting data with Sqoop between relational database systems such as SQL Server and HDFS for further analysis.
      • Designed, developed, constructed, installed, tested, and maintained complete data management and processing systems.
      • Data discovery, analytics and visualization, enterprise application integration, and dimensional and predictive modeling.
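A short PySpark sketch of the kind of Spark SQL analysis over Hive-managed tables mentioned above; the database, table, and column names are hypothetical.

```python
from pyspark.sql import SparkSession

# Spark session with Hive support so Spark SQL can query Hive-managed tables.
spark = (
    SparkSession.builder
    .appName("hive-analysis-sketch")
    .enableHiveSupport()
    .getOrCreate()
)

# Hypothetical Hive table; the query mirrors typical HiveQL/Spark SQL analysis.
daily_totals = spark.sql("""
    SELECT order_date, SUM(order_amount) AS total_amount
    FROM sales_db.orders
    GROUP BY order_date
    ORDER BY order_date
""")

daily_totals.show()
```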

    • Senior Software Engineer
      • Feb 2013 - May 2016

      • Responsible for the design, development, deployment, and maintenance of QlikView and Tableau solutions.
      • Designed and developed ETL flows using SSIS to convert raw data into structured data.
      • Created BI reports using QlikView and Tableau.
      • Implemented a star schema model to optimize the application and improved the performance of dashboard loads (see the sketch after this entry).
      • Deployed applications to the QlikView server through the QlikView Management Console and maintained the QlikView server.
      • Created MS SQL and Windows scheduled jobs and provided support for maintaining those jobs.
      • Worked on the company product Intuceo, which helps solve business analytics problems using machine learning algorithms.
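A compact sketch of the star schema shape implied above, expressed as SQL Server DDL issued from Python with pyodbc; the table names, columns, and connection string are hypothetical.

```python
import pyodbc

# Hypothetical SQL Server connection; adjust the DSN and credentials as needed.
conn = pyodbc.connect("DSN=DataWarehouse;Trusted_Connection=yes;")
cursor = conn.cursor()

# One fact table surrounded by dimension tables: the basic star schema layout.
cursor.execute("""
    CREATE TABLE DimCustomer (
        CustomerKey  INT PRIMARY KEY,
        CustomerName NVARCHAR(100)
    )
""")
cursor.execute("""
    CREATE TABLE DimDate (
        DateKey  INT PRIMARY KEY,
        FullDate DATE
    )
""")
cursor.execute("""
    CREATE TABLE FactSales (
        SalesKey    INT IDENTITY PRIMARY KEY,
        CustomerKey INT REFERENCES DimCustomer(CustomerKey),
        DateKey     INT REFERENCES DimDate(DateKey),
        SalesAmount DECIMAL(18, 2)
    )
""")
conn.commit()
```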

    • Software Engineer
      • Jan 2011 - Jan 2013

      • Optimized SQL queries by creating clustered and non-clustered indexes (see the sketch after this entry).
      • Developed extraction-layer code to tie multiple data sources (Excel, text, CSV, relational databases) together under one unified architecture.
      • Developed different types of dashboards in the UI and tested the application against the source data.
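A minimal illustration of the index tuning mentioned above, as SQL Server statements issued from Python with pyodbc; the table, columns, and connection string are hypothetical.

```python
import pyodbc

# Hypothetical connection to the reporting database.
conn = pyodbc.connect("DSN=ReportingDB;Trusted_Connection=yes;")
cursor = conn.cursor()

# A clustered index physically orders the table's rows by the key column,
# which speeds up range scans such as date filters.
cursor.execute(
    "CREATE CLUSTERED INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate)"
)

# A non-clustered index adds a separate lookup structure for a frequently
# filtered column without reordering the table itself.
cursor.execute(
    "CREATE NONCLUSTERED INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId)"
)
conn.commit()
```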

Education

  • Kakatiya University (University College of Pharmaceutical Sciences)
    Master of Computer Applications - MCA, Computer Engineering
    2007 - 2010
