Anil Gupta
Data Architect at TrueCar, Inc.
Languages: Hindi, English

Credentials
Sun Certified Java Programmer 1.5, Sun Microsystems

Experience

TrueCar, Inc. | United States | Technology, Information and Internet | 300-400 employees

Data Architect
Apr 2017 - Present

Technical Lead
May 2016 - Apr 2017

Senior Software Developer
Sep 2014 - May 2016
Built the backend of the car-buying platform on a big data stack.

Intuit | United States | Software Development | 700+ employees

Senior Software Engineer
Nov 2011 - Sep 2014
• Responsible for bootstrapping the big data project in my business unit; conducted proofs of concept in Hadoop/HBase.
• Implemented an Oracle-like "ORDER BY" sorting feature in HBase and submitted it as a patch (HBASE-7474).
• Wrote an HBase table schema metadata management layer in Java, used heavily to read and write data in HBase.
• Did data modeling in HBase; designed and implemented secondary indexes in HBase (see the sketch after this list).
• Integrated the HBase MapReduce bulk-load module with Apache Phoenix.
• Active member of the HBase and Phoenix mailing lists.
• Winner of CodeJam 2014 at Intuit PSD for the project "KPI metrics for Transactions", built with Storm and HBase.
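
HBase has no built-in secondary indexes, so designs like the one described above are commonly implemented with a separate index table that maps the indexed value back to the primary row key. The sketch below shows that pattern with the plain HBase 1.x client API; the table names, column family, and qualifiers are hypothetical illustrations, not the project's actual schema.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Index-table pattern for HBase secondary indexes: every write to the
// data table is mirrored into an index table keyed by the indexed value.
public class SecondaryIndexWriter {
    private static final byte[] CF = Bytes.toBytes("d");

    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table data  = conn.getTable(TableName.valueOf("users"));
             Table index = conn.getTable(TableName.valueOf("users_by_email"))) {

            byte[] rowKey = Bytes.toBytes("user#42");
            byte[] email  = Bytes.toBytes("jane@example.com");

            // 1. Write the primary row, keyed by user id.
            Put dataPut = new Put(rowKey);
            dataPut.addColumn(CF, Bytes.toBytes("email"), email);
            data.put(dataPut);

            // 2. Mirror it into the index table, keyed by the indexed
            //    value; the cell stores the primary row key so a lookup
            //    by email resolves back to the user row.
            Put indexPut = new Put(email);
            indexPut.addColumn(CF, Bytes.toBytes("pk"), rowKey);
            index.put(indexPut);
            // The two puts are not atomic; a real design must handle
            // partial failure (e.g., idempotent index rebuilds).
        }
    }
}
```

Client-maintained index rows like these are not atomic with the data write; Apache Phoenix, mentioned above, closes that gap by maintaining its index tables server-side.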

CityGrid Media | United States | Technology, Information and Internet | 1-100 employees

Software Engineer
Feb 2011 - Nov 2011
1. Ingested the inbound feed from more than 50 different providers, focusing on converting feeds from CSV, XML, and other formats into one unified format; the converted files are the input to a Hadoop-based data processing pipeline (see the mapper sketch after this list).
2. Wrote ad-hoc Hadoop and Pig jobs to process millions of records for specific processing needs.
3. Used Hive to analyze data on the Hadoop file system.
Buzzwords at work: Java, Oracle, Hadoop, Pig, Hive, BigData.
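
The normalization step of such a pipeline is often a map-only Hadoop job: one mapper class per provider layout, all emitting the same unified record. A minimal sketch, assuming a hypothetical provider CSV layout of id,name,city; the unified column order is likewise an illustration, not the actual feed spec.

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Maps one provider's CSV layout (id,name,city) onto the unified
// tab-separated layout the downstream pipeline expects.
public class CsvFeedMapper
        extends Mapper<LongWritable, Text, NullWritable, Text> {

    private final Text out = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context ctx)
            throws IOException, InterruptedException {
        String[] f = value.toString().split(",", -1);
        if (f.length < 3) {
            // Count and skip records that don't match the expected layout.
            ctx.getCounter("feed", "malformed").increment(1);
            return;
        }
        // Unified record: provider_id \t listing_name \t city
        out.set(f[0].trim() + "\t" + f[1].trim() + "\t" + f[2].trim());
        ctx.write(NullWritable.get(), out);
    }
}
```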

University at Buffalo | United States | Higher Education | 700+ employees

Graduate Student
Aug 2009 - Dec 2010
My master's focused primarily on databases, data-intensive computing, and data integration.

MAJOR ACADEMIC PROJECTS
• Netflix Movie Recommendation Facebook Application (Java, XML): developed a Facebook application that intelligently recommends Netflix movies to a user based on their profile and their friends' profiles.
• K-Means Clustering on Hadoop (Java, MapReduce): developed a data-intensive application on the Hadoop framework in which data is clustered by K-Means (see the sketch after this list).
• SQL Query Processor (Java, Windows): developed a relational query processor that optimizes a given SQL query using greedy algorithms.
• Peer-to-Peer File Sharing Software using BitTorrent Protocol v1.0 (C, Sockets, UNIX): implemented BitTorrent, a peer-to-peer (P2P) protocol for sharing and transferring files.
• Implementation of a Basic File System (C, UNIX): built a disk-like secondary storage server and evaluated the performance of various disk I/O scheduling algorithms.
• Concurrency and Inter-Process Communication Models (C, Pthreads, UNIX): solved inter-process communication problems during concurrent execution of processes using POSIX threads.
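
The core of the MapReduce formulation of K-Means is the assignment step: each mapper tags every point with its nearest centroid so reducers can average each cluster into its new centroid. A minimal sketch, with two 2-D centroids hard-coded for brevity; a real job would load the current centroids from the distributed cache on each iteration.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Assignment step of K-Means on MapReduce: each input line is a 2-D
// point "x,y"; the mapper emits (nearest-centroid-id, point).
public class KMeansAssignMapper
        extends Mapper<LongWritable, Text, IntWritable, Text> {

    // Hard-coded for the sketch; a real job would read the current
    // centroids from the distributed cache in setup().
    private final double[][] centroids = {{0.0, 0.0}, {5.0, 5.0}};

    @Override
    protected void map(LongWritable key, Text value, Context ctx)
            throws IOException, InterruptedException {
        String[] p = value.toString().split(",");
        double x = Double.parseDouble(p[0]);
        double y = Double.parseDouble(p[1]);

        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < centroids.length; i++) {
            double dx = x - centroids[i][0];
            double dy = y - centroids[i][1];
            double d = dx * dx + dy * dy; // squared Euclidean distance
            if (d < bestDist) { bestDist = d; best = i; }
        }
        ctx.write(new IntWritable(best), value);
    }
}
```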

Commonwealth of Massachusetts | United States | Government Administration | 700+ employees

Junior Data Analyst (Summer Internship)
Jun 2010 - Aug 2010
• Responsible for the ETL processing of a data warehouse project.
• Generated analytical data and ad-hoc reports using SQL OLAP functions.
• Developed a Java utility to load database tables from flat files using SQL*Loader (see the sketch after this list).
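
A utility of that shape typically generates a SQL*Loader control file describing the flat file's layout and then shells out to Oracle's sqlldr client. A minimal sketch, with hypothetical table, columns, and credentials:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Generate a SQL*Loader control file for a target table, then invoke
// the sqlldr command-line client to perform the load.
public class FlatFileLoader {

    public static void main(String[] args) throws IOException, InterruptedException {
        // Control file describing the flat file's layout and target table.
        Path ctl = Paths.get("load_employees.ctl");
        Files.write(ctl, String.join("\n",
                "LOAD DATA",
                "INFILE 'employees.csv'",
                "APPEND INTO TABLE employees",
                "FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'",
                "(emp_id, emp_name, dept)").getBytes());

        // Shell out to the Oracle SQL*Loader client.
        Process p = new ProcessBuilder(
                "sqlldr", "userid=scott/tiger@orcl",
                "control=" + ctl, "log=load_employees.log")
                .inheritIO()
                .start();
        int exit = p.waitFor();
        System.out.println("sqlldr exited with status " + exit);
    }
}
```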

Associate Software Developer
Aug 2007 - Aug 2009
Project: AutoDiscovery, a multi-tier application used to discover layer-2 and layer-3 network devices.
• Developed the graphical user interface (GUI) for AutoDiscovery.
• Wrote entity beans and stateless session beans.
• Used a database and XML to store and retrieve user configuration data in a distributed environment.
• Wrote JUnit test cases, integrated them into the nightly build, and automated failure reporting by email.
• Implemented device filtering based on IP address, MAC address, SYS OID, device type, and subnet (see the sketch after this list).
• Built several utilities that helped standardize the graphical user interface.
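
Of the filters listed, subnet filtering is the most involved: it is a CIDR containment check on the device's IPv4 address. A minimal sketch, with illustrative class and method names rather than the project's own:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Keep a discovered device only if its IPv4 address falls inside a
// configured CIDR block, e.g. "10.1.2.0/24".
public class SubnetFilter {
    private final int network;
    private final int mask;

    public SubnetFilter(String cidr) throws UnknownHostException {
        String[] parts = cidr.split("/");
        int prefix = Integer.parseInt(parts[1]);
        // Prefix of n ones followed by (32 - n) zeros, as a signed int.
        this.mask = prefix == 0 ? 0 : -1 << (32 - prefix);
        this.network = toInt(InetAddress.getByName(parts[0])) & mask;
    }

    public boolean matches(String ip) throws UnknownHostException {
        return (toInt(InetAddress.getByName(ip)) & mask) == network;
    }

    // Pack the 4 address bytes into one int for bitwise comparison.
    private static int toInt(InetAddress addr) {
        byte[] b = addr.getAddress();
        return ((b[0] & 0xFF) << 24) | ((b[1] & 0xFF) << 16)
             | ((b[2] & 0xFF) << 8)  |  (b[3] & 0xFF);
    }

    public static void main(String[] args) throws UnknownHostException {
        SubnetFilter f = new SubnetFilter("10.1.2.0/24");
        System.out.println(f.matches("10.1.2.57"));  // true
        System.out.println(f.matches("10.1.3.57"));  // false
    }
}
```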
Education

State University of New York at Buffalo
Master of Science, Computer Science

Shri G S Institute of Technology & Science
Bachelor of Engineering, Information Technology