Bharadwaj Thallavajjala

Software Engineer II at Next Phase Solutions and Services, Inc.
Contact Information
us****@****om
(386) 825-5501
Location
Gwynn Oak, Maryland, United States


Credentials

  • SP840: Managed Delta Lake (AWS Databricks), Apache Spark
    Databricks
    Nov, 2019
    - Nov, 2024
  • Azure Databricks Essential Training
    LinkedIn
    Oct, 2019
    - Nov, 2024
  • Apache Spark Essential Training
    LinkedIn
    Sep, 2019
    - Nov, 2024

Experience

    • United States
    • IT Services and IT Consulting
    • 1-100 Employees
    • Software Engineer II
      • Sep 2020 - Present

      • Designed, developed, and implemented custom software and database application capabilities for a variety of legacy and modernized systems with limited oversight.
      • Led discussions across cross-functional teams, engaging senior management to ensure consistency in solution development and implementation.
      • Technical depth in Databricks and Qubole.
      • Developed and maintained scalable data pipelines and built out new integrations to support continuing increases in data volume and complexity for sources such as TMSIS, NPPES, QIES, and PECOS.
      • Experienced with PySpark and SQL.
      • Extensively wrote SQL queries (subqueries, correlated subqueries, and join conditions) for data quality, data accuracy, data analysis, and data extraction needs on complex source systems and the MDM system.
      • Ran highly complex ad-hoc SQL queries to satisfy business users' data needs and resolve their questions about source and MDM data products.
      • Implemented database design and enhancements for eMDM 3.0 using SQL, Python, REST APIs, etc.
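The data-quality checks described above can be sketched with a correlated subquery. This is a hypothetical illustration using SQLite; the table and column names (`provider`, `claim`, `npi`) are invented for the sketch, not taken from the actual CMS systems.

```python
import sqlite3

# Build a tiny in-memory source system with a deliberate integrity gap.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE provider (npi TEXT PRIMARY KEY, state TEXT)")
cur.execute("CREATE TABLE claim (claim_id TEXT, npi TEXT, amount REAL)")
cur.executemany("INSERT INTO provider VALUES (?, ?)",
                [("1001", "MD"), ("1002", "OK")])
cur.executemany("INSERT INTO claim VALUES (?, ?, ?)",
                [("c1", "1001", 25.0), ("c2", "9999", 40.0)])

# Correlated subquery: flag claims whose NPI has no matching provider row.
orphans = cur.execute("""
    SELECT c.claim_id
    FROM claim c
    WHERE NOT EXISTS (
        SELECT 1 FROM provider p WHERE p.npi = c.npi
    )
""").fetchall()
print(orphans)  # [('c2',)]
```

The same NOT EXISTS pattern scales to any parent-child pair once the join keys are known.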

    • Software Engineer I
      • Jul 2018 - Sep 2020

      • MDM (Master Data Management) represents the business objects that contain the most valuable, agreed-upon information shared across an organization; it covers relatively static reference data as well as transactional, unstructured, analytical, hierarchical, and metadata.
      • Designed and developed a single source of beneficiary and provider information across the agency.
      • MDM is an enterprise resource that resides in the CMS data layer of the architecture; it allows CMS to reduce costs, streamline operations, and better manage administrative activities.
      • The EDSC contract develops and maintains enterprise services that make information from the MDM system available to its authoritative Centers for Medicare & Medicaid Services (CMS) clients.
      • Improved and ensured data quality throughout the data integration processes.

    • United States
    • IT Services and IT Consulting
    • 100-200 Employees
    • Data Stage Developer
      • Sep 2017 - Jun 2018

      • Prepared documentation addressing the referential-integrity relationships between tables at the ETL level.
      • Prepared validation SQL queries equivalent to the Informatica mappings to verify the application was working correctly.
      • Extensively used Informatica transformations such as Source Qualifier, Rank, SQL, Router, Filter, Lookup, Joiner, Aggregator, Normalizer, and Sorter, along with all transformation properties.
      • Interacted with business analysts and end users to understand requirements and convert business specifications to technical specifications; developed and implemented Informatica mappings for the different stages of ETL.
      • Involved in performance tuning of the ETL process and performed data warehouse testing.
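A validation query "equivalent to the mapping", as described above, typically reconciles the target against the source after a load. A minimal sketch, again in SQLite with invented table names (`src`, `tgt`):

```python
import sqlite3

# Simulate a load that dropped one row, then reconcile source vs. target.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE src (id INTEGER, val TEXT)")
cur.execute("CREATE TABLE tgt (id INTEGER, val TEXT)")
cur.executemany("INSERT INTO src VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])
cur.executemany("INSERT INTO tgt VALUES (?, ?)", [(1, "a"), (2, "b")])

# Anti-join: rows present in the source but missing from the target.
missing = cur.execute("""
    SELECT s.id
    FROM src s
    LEFT JOIN tgt t ON t.id = s.id
    WHERE t.id IS NULL
""").fetchall()
print(missing)  # [(3,)]
```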

    • Jr. Data Stage Developer
      • Feb 2017 - Aug 2017

      • Created transformations using SQL scripts to modify data before loading into tables.
      • Used Informatica PowerCenter to load data into the data warehouse.
      • Constructed an extended entity-relationship diagram based on a narrative description of a business scenario.
      • Worked with SFDC session log error files to investigate errors and debug issues.

    • United States
    • Higher Education
    • 400-500 Employees
    • Teaching Assistant
      • Dec 2015 - Dec 2016

      Worked with SQL and Python libraries including NumPy, SciPy, pandas, and scikit-learn; machine learning algorithms such as naïve Bayes, SVMs, decision trees, KNN, and K-means clustering; and data engineering tools including Hadoop, Spark, Scala, HBase, and AWS or Google Cloud.
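Of the algorithms named above, KNN is the simplest to sketch. This is a hypothetical standard-library-only version for illustration (the coursework would have used scikit-learn's `KNeighborsClassifier`); `knn_predict` and the sample points are invented here.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (feature_tuple, label) pairs; query: a feature tuple.
    Returns the majority label among the k nearest training points."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Two small clusters; a query near the second cluster should vote "b".
points = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
          ((5.0, 5.0), "b"), ((5.2, 4.9), "b"), ((4.8, 5.1), "b")]
print(knn_predict(points, (5.0, 4.8)))  # b
```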

Education

  • Oklahoma City University
    Master's degree, Computer Science
    2015 - 2016
