Satish Pujari
Software Engineer at Trinity Mobility
Experience
Trinity Mobility (India, Software Development, 200-300 employees)
Software Engineer, Jun 2018 - Present
- Good working knowledge of Microsoft Azure services, including App Services, DNS, databases, and virtual machines.
- Set up a multi-node Hadoop cluster on Linux.
- Imported and exported data into HDFS using Sqoop.
- Performed query optimization in Hive and migrated Hive data into HBase.
- Wrote Hive and Pig scripts to reduce job execution time.
- Imported data from various sources, performed transformations using Hive, Pig, and MapReduce, and loaded the results into HDFS.
- Wrote Hive queries (HQL) to analyze data in the Hive warehouse.
- Integrated Hive with HBase and Pig with HBase.
- Implemented partitioning, dynamic partitioning, and bucketing in Hive.
- Built reusable Hive UDF libraries for business requirements, enabling users to call these UDFs from Hive queries.
- Designed and created Hive external tables using the metastore, and analyzed data using joins and built-in functions on Hive tables.
- Performed data analytics using SQL.
- In-depth understanding and working knowledge of Spark architecture, including Spark Core (RDDs), the Spark Structured APIs (DataFrames, Datasets), and Spark SQL, using Python.
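The partitioning and bucketing work above can be sketched in HiveQL. All table names, columns, and paths here are hypothetical illustrations, not details from the original projects:

```sql
-- Hypothetical external table showing partitioning and bucketing in Hive.
CREATE EXTERNAL TABLE orders (
  order_id    BIGINT,
  customer_id BIGINT,
  amount      DOUBLE
)
PARTITIONED BY (order_date STRING)
CLUSTERED BY (customer_id) INTO 16 BUCKETS
STORED AS ORC
LOCATION '/data/warehouse/orders';

-- Enable dynamic partitioning so partitions are derived from the data itself.
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- Load from a hypothetical staging table; the trailing column populates the partition.
INSERT OVERWRITE TABLE orders PARTITION (order_date)
SELECT order_id, customer_id, amount, order_date
FROM staging_orders;
```

Bucketing by `customer_id` here keeps each customer's rows in a predictable bucket, which can speed up joins and sampling on that key.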
IBM (IT Services and IT Consulting, 1-100 employees)
Big Data Engineer, Jun 2016 - Mar 2018
- Understood business requirements and worked with analysts to finalize them.
- Developed Java Spring Batch modules and jobs to pull data from source databases residing in the client network.
- Tested application functionality in the local environment.
- Generated Avro schemas for all tables in the source database to hold the data.
- Developed Hadoop-based programs to store and process data arriving from Kafka.
- Wrote Apache Hive scripts on top of the Hadoop-based data.
- Conducted load, local, and functional tests of the developed application with help from the infra team.
- Created Jenkins jobs ahead of the load and functional tests.
- Used Camus to move data from SoftLayer Kafka into Hadoop.
- Ran functional and load tests in lower environments such as dev.
- Performed Camus throughput analysis and produced reports based on it.
- Knowledge of Splunk; worked with Linux commands in PuTTY and Hive/Phoenix commands on the edge node.
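A minimal sketch of the kind of Hive script described above, querying Avro-encoded data that Camus has landed in HDFS. The table name, HDFS paths, schema URL, and columns are all assumptions for illustration:

```sql
-- Hypothetical Hive table over Avro files landed in HDFS by Camus.
-- Columns are derived from the Avro schema referenced below.
CREATE EXTERNAL TABLE kafka_events
STORED AS AVRO
LOCATION '/data/camus/topics/events/hourly'
TBLPROPERTIES ('avro.schema.url' = 'hdfs:///schemas/events.avsc');

-- Example analysis: daily event counts over the landed data,
-- assuming the schema defines an event_time timestamp field.
SELECT to_date(event_time) AS event_day, COUNT(*) AS events
FROM kafka_events
GROUP BY to_date(event_time);
```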
Education
Jawaharlal Nehru Technological University, Anantapur
Bachelor of Technology (B.Tech.)