Anil Gupta

Data Architect at TrueCar, Inc.
Contact Information
us****@****om
(386) 825-5501
Location
Los Angeles, California, United States
Languages
  • Hindi
  • English


Credentials

  • Sun Certified Java Programmer 1.5
    Sun Microsystems

Experience

    • TrueCar, Inc.
    • United States
    • Technology, Information and Internet
    • 300 - 400 Employee
    • Data Architect
      • Apr 2017 - Present

    • Technical Lead
      • May 2016 - Apr 2017

    • Senior Software Developer
      • Sep 2014 - May 2016

      Building the backend of a car-buying platform using the big data stack.

    • Intuit
    • United States
    • Software Development
    • 700 & Above Employee
    • Senior Software Engineer
      • Nov 2011 - Sep 2014

      • Responsible for bootstrapping the big data project in my business unit; conducted POCs in Hadoop/HBase.
      • Implemented an Oracle-like "Order by" sorting feature in HBase and submitted it as a patch (HBASE-7474).
      • Wrote an HBase table schema metadata management layer in Java, used heavily to read and write data in HBase.
      • Performed data modeling in HBase; designed and implemented secondary indexes in HBase.
      • Integrated the HBase MapReduce bulk-load module with Apache Phoenix.
      • Active member of the HBase and Phoenix mailing lists.
      • Winner of CodeJam 2014 at Intuit PSD for the project "KPI metrics for Transactions", built with Storm and HBase.
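To illustrate the "Order by" work described above: HBase stores rows in lexicographic byte order, so sorting signed numbers correctly requires an order-preserving byte encoding of the sort key. The sketch below is not the HBASE-7474 patch itself, just a minimal standalone illustration of the sign-bit-flip encoding such a feature relies on; class and method names are invented.

```java
import java.util.Arrays;

// Minimal sketch (not the actual HBASE-7474 patch): flipping the sign bit
// of a big-endian int makes unsigned lexicographic byte comparison - the
// order HBase uses for row keys - agree with signed numeric order.
public class OrderPreservingKey {
    // Encode a signed int as 4 big-endian bytes with the sign bit flipped,
    // so negative values sort before positive ones byte-wise.
    public static byte[] encode(int value) {
        int v = value ^ 0x80000000;
        return new byte[] {
            (byte) (v >>> 24), (byte) (v >>> 16), (byte) (v >>> 8), (byte) v
        };
    }

    // Unsigned lexicographic comparison, the way HBase compares row keys.
    public static int compareKeys(byte[] a, byte[] b) {
        for (int i = 0; i < a.length && i < b.length; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        int[] values = { 42, -7, 0, Integer.MIN_VALUE, Integer.MAX_VALUE };
        byte[][] keys = new byte[values.length][];
        for (int i = 0; i < values.length; i++) keys[i] = encode(values[i]);
        Arrays.sort(keys, OrderPreservingKey::compareKeys);
        // Decoding the byte-sorted keys recovers numeric order:
        // MIN_VALUE, -7, 0, 42, MAX_VALUE
        for (byte[] k : keys) {
            int v = ((k[0] & 0xFF) << 24) | ((k[1] & 0xFF) << 16)
                  | ((k[2] & 0xFF) << 8) | (k[3] & 0xFF);
            System.out.println(v ^ 0x80000000);
        }
    }
}
```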

    • United States
    • Technology, Information and Internet
    • 1 - 100 Employee
    • Software Engineer
      • Feb 2011 - Nov 2011

      1. Worked on ingesting inbound feeds from more than 50 different providers, with a focus on converting each feed from CSV, XML, or any other format into one unified format; the converted files are the input to a Hadoop-based data processing pipeline.
      2. Wrote Hadoop and Pig jobs on the fly to process millions of records for specific processing tasks.
      3. Used Hive to analyze data on the Hadoop file system.
      Technologies: Java, Oracle, Hadoop, Pig, Hive, big data.
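The feed-normalization idea above can be sketched as mapping each provider's header order onto one canonical schema before records enter the pipeline. This is a hypothetical illustration, not the production code; the unified column names and delimiter are invented.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of per-provider feed normalization: a provider's
// CSV header order maps onto one fixed canonical column order. All field
// names here are invented examples.
public class FeedNormalizer {
    // Canonical output columns of the unified format (assumed, for illustration).
    static final String[] UNIFIED = { "vin", "make", "model", "price" };

    // Map one provider-specific CSV line to the unified, pipe-delimited format,
    // given that provider's header order.
    public static String normalize(String[] providerHeader, String csvLine) {
        String[] fields = csvLine.split(",", -1);
        Map<String, String> record = new LinkedHashMap<>();
        for (int i = 0; i < providerHeader.length && i < fields.length; i++) {
            record.put(providerHeader[i].trim().toLowerCase(), fields[i].trim());
        }
        StringBuilder out = new StringBuilder();
        for (String col : UNIFIED) {
            if (out.length() > 0) out.append('|');
            out.append(record.getOrDefault(col, "")); // missing fields stay empty
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Provider A happens to put price first; the unified order is fixed.
        String[] headerA = { "Price", "VIN", "Make", "Model" };
        System.out.println(normalize(headerA, "18999,1HGCM82633A,Honda,Accord"));
        // -> 1HGCM82633A|Honda|Accord|18999
    }
}
```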

    • United States
    • Higher Education
    • 700 & Above Employee
    • Graduate Student
      • Aug 2009 - Dec 2010

      My master's was primarily focused on databases, data-intensive computing, and data integration.

      MAJOR ACADEMIC PROJECTS
      • Netflix Movie Recommendation Facebook Application (Java, XML): developed a Facebook application that intelligently recommends Netflix movies to a user based on their profile and their friends' profile information.
      • K-Means Clustering on Hadoop (Java, MapReduce): developed a data-intensive application on the Hadoop framework in which data is clustered by K-Means.
      • SQL Query Processor (Java, Windows): developed a relational query processor that optimizes a given SQL query using greedy algorithms.
      • Peer-to-Peer File Sharing Software using BitTorrent Protocol v1.0 (C, sockets, UNIX): implemented BitTorrent, a peer-to-peer (P2P) file-sharing and transfer protocol.
      • Implementation of a Basic File System (C, UNIX): built a disk-like secondary storage server and evaluated the performance of various disk I/O scheduling algorithms.
      • Concurrency and Inter-Process Communication Models (C, Pthreads, UNIX): solved inter-process communication problems during concurrent execution of processes using POSIX threads.
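The K-Means-on-Hadoop project above follows the usual split of one iteration into two phases: assigning each point to its nearest centroid (the map side) and averaging each cluster to get updated centroids (the reduce side). A minimal single-machine sketch of that iteration, assuming Euclidean distance and invented sample data:

```java
import java.util.Arrays;

// Minimal single-machine sketch of one K-Means iteration: nearest() mirrors
// the map phase (point -> nearest centroid), iterate() mirrors the reduce
// phase (average each cluster's points into a new centroid).
public class KMeansStep {
    // Index of the nearest centroid by squared Euclidean distance.
    public static int nearest(double[] p, double[][] centroids) {
        int best = 0;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int c = 0; c < centroids.length; c++) {
            double d = 0;
            for (int i = 0; i < p.length; i++) {
                double diff = p[i] - centroids[c][i];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = c; }
        }
        return best;
    }

    // One full iteration: assign every point, then average each cluster.
    public static double[][] iterate(double[][] points, double[][] centroids) {
        int k = centroids.length, dim = centroids[0].length;
        double[][] sums = new double[k][dim];
        int[] counts = new int[k];
        for (double[] p : points) {
            int c = nearest(p, centroids);
            counts[c]++;
            for (int i = 0; i < dim; i++) sums[c][i] += p[i];
        }
        for (int c = 0; c < k; c++) {
            if (counts[c] == 0) sums[c] = centroids[c].clone(); // keep empty clusters
            else for (int i = 0; i < dim; i++) sums[c][i] /= counts[c];
        }
        return sums;
    }

    public static void main(String[] args) {
        double[][] points = { {1, 1}, {1.5, 2}, {8, 8}, {9, 10} };
        double[][] centroids = { {0, 0}, {10, 10} };
        System.out.println(Arrays.deepToString(iterate(points, centroids)));
        // -> [[1.25, 1.5], [8.5, 9.0]]
    }
}
```

On Hadoop, the map output key would be the centroid index and the reduce step would compute each cluster's mean; iterations repeat until the centroids stop moving.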

    • United States
    • Government Administration
    • 700 & Above Employee
    • Junior Data Analyst (Summer Internship)
      • Jun 2010 - Aug 2010

      • Responsible for the ETL processing of a data warehouse project.
      • Generated analytical data and ad-hoc reports using the OLAP functions of SQL.
      • Developed a Java utility to load database tables from flat files using SQL*Loader.
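A utility like the one described above typically generates a SQL*Loader control file describing how a flat file maps onto a table. The sketch below is a hypothetical illustration of that idea, not the original utility; the table, columns, and file names are invented.

```java
import java.util.List;

// Hypothetical sketch of a SQL*Loader helper: emit a control file for
// loading a comma-delimited flat file into a table. The LOAD DATA /
// INFILE / FIELDS TERMINATED BY syntax is standard SQL*Loader; the
// example table and columns are invented.
public class CtlFileBuilder {
    public static String controlFile(String dataFile, String table, List<String> columns) {
        return "LOAD DATA\n"
             + "INFILE '" + dataFile + "'\n"
             + "INTO TABLE " + table + "\n"
             + "FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'\n"
             + "(" + String.join(", ", columns) + ")\n";
    }

    public static void main(String[] args) {
        // The generated text would be written to a .ctl file and passed to sqlldr.
        System.out.print(controlFile("sales.dat", "SALES",
                List.of("SALE_ID", "REGION", "AMOUNT")));
    }
}
```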

    • Associate Software Developer
      • Aug 2007 - Aug 2009

      Project: AutoDiscovery, a multi-tier application used to discover layer-2 and layer-3 network devices.
      • Developed the graphical user interface (GUI) for AutoDiscovery.
      • Wrote entity beans and stateless session beans.
      • Used a database and XML to store and retrieve user configuration data in a distributed environment.
      • Wrote JUnit test cases, integrated them with the nightly build, and automated failure reporting by email.
      • Implemented device filtering based on IP address, MAC address, SYS OID, device type, and subnet.
      • Built several utilities that facilitated standardization of the graphical user interface.

Education

  • State University of New York at Buffalo
    Master of Science, Computer Science
    2009 - 2010
  • Shri G S Institute of Technology & Science
    Bachelor of Engineering, Information Technology
    2003 - 2007
