Parwinderjit Singh
Senior Consultant at Arctiq - A Benchmark Company
Recommendations
Cliff Conklin
Parwinder was a pleasure to work with, always willing to step up in a very demanding environment. He was also tireless in his thirst for knowledge and never stopped trying to gain a deeper and broader understanding of our environment.
Shi Chen
Parwinderjit is a fast learner and always completes assigned tasks on time. He has gained experience with various DevOps platforms and best practices, such as Azure, Kubernetes (K8s), etc. I would recommend him for a DevOps role.
Credentials
- Splunk Certified Admin 6.x, Splunk (Apr 2018 - Nov 2024)
- Splunk Certified Power User, Splunk (Apr 2018 - Nov 2024)
- Splunk Certified User 6.x, Splunk (Apr 2018 - Nov 2024)
- AWS Certified Solutions Architect - Associate, Amazon Web Services (AWS) (Feb 2018 - Nov 2024)
- HashiCorp Certified: Consul Associate, HashiCorp (Aug 2021 - Nov 2024)
- Certified Kubernetes Administrator, The Linux Foundation (Apr 2020 - Nov 2024)
- HashiCorp Certified: Vault Associate, HashiCorp (Dec 2020 - Nov 2024)
- HashiCorp Certified: Terraform Associate, HashiCorp (Jun 2020 - Nov 2024)
Experience
- Arctiq - A Benchmark Company (Canada; IT Services and IT Consulting; 1 - 100 employees)
  Senior Consultant, Dec 2022 - Present
- Lightspeed Commerce (Canada; Software Development; 700+ employees)
  Senior Site Reliability Expert, Nov 2021 - Nov 2022
• Designed and built advanced cloud-native infrastructure leveraging technologies such as Kubernetes, Helm, Terraform, Ansible, GitOps, OPA, Calico, Istio, etc.
• Wrote modules to provision AWS and Google Cloud services using Terraform, Ansible, and Packer; worked with managed Kubernetes services (EKS and GKE) across multiple regions.
• Wrote a GitHub Actions pipeline using Terragrunt to test the Terraform modules, with test cases written in Go (a minimal sketch of such a test follows below).
• Managed IaC for GCP and AWS cloud environments using Terraform.
• Defined SLOs and SLAs for services and monitored error-budget burn.
• Used Argo CD to deploy Helm- and Kustomize-based applications via GitOps.
• Continuously improved the stability, scalability, cost-effectiveness, and operational excellence of Lightspeed systems.
• Delivered and contributed to in-house tools written in Python.
• Installed HashiCorp Vault on Google Kubernetes Engine and acted as the point of contact for all architecture decisions.
• Did ongoing work automating and refactoring cloud infrastructure.
• Facilitated team rituals including daily stand-ups, on-call handover, backlog refinement, and retrospectives.
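The third bullet above describes exercising Terraform modules from CI with Go test cases. As a minimal sketch of what such a test might look like, assuming the open-source Terratest library is used; the module path, input variables, and output name here are hypothetical placeholders, not taken from the actual codebase:

    package test

    import (
        "testing"

        "github.com/gruntwork-io/terratest/modules/terraform"
        "github.com/stretchr/testify/assert"
    )

    // TestVpcModule provisions the module in a sandbox project,
    // asserts on its outputs, and tears everything down afterwards.
    func TestVpcModule(t *testing.T) {
        opts := &terraform.Options{
            // Hypothetical module path and inputs, for illustration only.
            TerraformDir: "../modules/vpc",
            Vars: map[string]interface{}{
                "name":       "terratest-vpc",
                "cidr_block": "10.10.0.0/16",
            },
        }

        defer terraform.Destroy(t, opts) // always clean up test infrastructure
        terraform.InitAndApply(t, opts)

        // Assert the module exported a non-empty VPC ID.
        vpcID := terraform.Output(t, opts, "vpc_id")
        assert.NotEmpty(t, vpcID)
    }

In a GitHub Actions job, a pipeline like the one described would typically run this via "go test -v ./test/..." once cloud credentials are configured.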
- Canadian Tire Corporation (Canada; Retail; 700+ employees)
  Staff DevOps Engineer, Jul 2019 - Jul 2021
• Created multiple versioned Terraform modules to set up and configure Azure, GCP, and HashiCorp Vault resources.
• Created per-application Terraform workspaces and Jenkins pipelines to create infrastructure and deploy applications onto it.
• Set up Azure and GCP module pipelines using Terraform, HashiCorp Vault, Jenkins, Kitchen CI, and InSpec to validate the modules.
• Used Jenkins pipelines to drive all microservice builds out to the Docker registry and deploy them to Azure Kubernetes Service (AKS) and Google Kubernetes Engine (GKE) (a minimal verification sketch follows below).
• Applied CI/CD best practices with Bitbucket, Confluence, and Jenkins.
• Created and managed Azure resources (Virtual Machines, App Services, Storage Accounts, Service Bus, Front Door, Application Gateway, SQL databases, Function Apps, Redis, Load Balancers) using Terraform.
• Hands-on creation of Windows/Linux virtual machine images using Jenkins and Packer.
• Worked closely with the R&D and product development teams to deploy their microservices to the Kubernetes clusters.
• Helped ensure the applications being deployed were built with quality and user experience in mind.
• Monitored Git repositories and enforced developer best practices.
• Led Azure Kubernetes Service (AKS) migration projects.
• Integrated Grafana and Prometheus on Azure Kubernetes clusters and created multiple dashboards for application teams.
• Automated Azure infrastructure with Terraform.
• Strong hands-on Infrastructure as Code (IaC) skills using Terraform and Packer for golden images.
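Deployments to AKS and GKE of the kind described above are commonly verified programmatically against the cluster API. A minimal sketch, assuming a standard kubeconfig and the official client-go library; the namespace is a placeholder, and this is an illustration of the idea rather than the actual tooling used:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load credentials from the default kubeconfig (~/.kube/config).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }

        // List deployments in a placeholder namespace and report readiness.
        deps, err := clientset.AppsV1().Deployments("default").
            List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, d := range deps.Items {
            fmt.Printf("%s: %d/%d replicas ready\n",
                d.Name, d.Status.ReadyReplicas, d.Status.Replicas)
        }
    }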
- Unravel Data (United States; Software Development; 100 - 200 employees)
  Senior DevOps Engineer, Nov 2018 - Mar 2019
• Automated the release pipeline to achieve zero-touch deployments using Jenkins.
• Assisted developers with establishing and applying appropriate branching and labelling/naming conventions using Subversion (SVN) source control.
• Architected and delivered cloud migration solutions.
• Deployed Dockerized Java applications in AWS and wrote Dockerfiles.
• Set up and configured AWS VPC components (subnets, IGWs, security groups, EC2 instances, Elastic Load Balancers, and NAT gateways) for an Elastic MapReduce cluster as well as application- and web-layer client access (a minimal inspection sketch follows below).
• Deployed a Cloudera Hadoop cluster in an AWS environment.
• Secured the Hadoop cluster against unauthorized access with Kerberos, LDAP integration, and TLS for data transfer among the cluster nodes.
• Hands-on work with various AWS services such as Redshift clusters and Route 53 domain configuration.
• Migrated an existing on-premises application to AWS.
• Coordinated continuously with the QA, production support, and deployment teams.
• Implemented test scripts to support test-driven development and continuous integration.
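VPC setup work of this kind is often cross-checked with a small script against the AWS API. A minimal sketch using the AWS SDK for Go (v1); the region is a placeholder and credentials are assumed to come from the standard environment/shared-config chain:

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/ec2"
    )

    func main() {
        // Credentials come from the standard chain (env vars, shared config, IAM role).
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String("us-east-1"), // placeholder region
        }))
        svc := ec2.New(sess)

        // List all VPCs in the region with their CIDR blocks.
        out, err := svc.DescribeVpcs(&ec2.DescribeVpcsInput{})
        if err != nil {
            log.Fatal(err)
        }
        for _, vpc := range out.Vpcs {
            fmt.Printf("%s  %s\n", aws.StringValue(vpc.VpcId), aws.StringValue(vpc.CidrBlock))
        }
    }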
- Verizon (United States; IT Services and IT Consulting; 700+ employees)
  Senior Product Development Engineer - 1 (Big Data DevOps Engineer), Jun 2017 - Nov 2018
• Installed and upgraded Cloudera Hadoop clusters remotely in client data centres using Ansible.
• Configured services such as HDFS, ZooKeeper, Hive, Kafka, Spark, Solr, and HBase in Cloudera Manager and set up high-availability (HA) architectures.
• Set up the monitoring tools Splunk, Nagios, and PagerDuty for Hadoop monitoring and alerting (a minimal health-check sketch follows below).
• Performance-tuned Hadoop clusters and Hadoop MapReduce and Spark job routines.
• Screened Hadoop cluster job performance; handled data modelling, database backup and recovery, and the management and review of Hadoop log files.
• Installed Splunk search heads, indexers, and forwarders on 500 servers (Linux environment); created dashboard views, reports, and alerts for events and configured alert mail; responsible for scheduling and automating database tasks (jobs, alerts, emails, notifications).
• Involved in multiple phases of testing, for example integration, smoke, performance, and load testing; also ran the built-in test cases in Jenkins against the master branch.
• In-depth knowledge of hardware sizing, capacity management, cluster management and maintenance, and performance monitoring and configuration.
• Designed, installed, and managed the configuration of Cloudera Hadoop clusters on AWS Cloud for experimental UAT testing.
• Worked with Docker to package applications with all of their dependencies into standardized units for software development.
• Participated in customer calls to execute acceptance testing, understand new requirements, and open Jira tickets to implement them in upcoming releases.
• Identified opportunities for improvement and made constructive suggestions for change on the application side.
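Hadoop monitoring of the sort described above often polls the NameNode's JMX-over-HTTP endpoint. A minimal Go sketch, assuming a Hadoop 3 NameNode on its default web port; the hostname is a placeholder, and exact bean fields can vary between Hadoop versions:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
    )

    // jmxResponse models the subset of the NameNode JMX payload we care about.
    type jmxResponse struct {
        Beans []struct {
            NumLiveDataNodes int `json:"NumLiveDataNodes"`
            NumDeadDataNodes int `json:"NumDeadDataNodes"`
        } `json:"beans"`
    }

    func main() {
        // Placeholder NameNode host; 9870 is the Hadoop 3 default web port.
        url := "http://namenode.example.com:9870/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState"

        resp, err := http.Get(url)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        var jmx jmxResponse
        if err := json.NewDecoder(resp.Body).Decode(&jmx); err != nil {
            log.Fatal(err)
        }
        if len(jmx.Beans) == 0 {
            log.Fatal("no FSNamesystemState bean in JMX response")
        }

        b := jmx.Beans[0]
        fmt.Printf("live datanodes: %d, dead datanodes: %d\n", b.NumLiveDataNodes, b.NumDeadDataNodes)
        if b.NumDeadDataNodes > 0 {
            // In a real setup this condition would page via Nagios/PagerDuty.
            fmt.Println("ALERT: dead datanodes detected")
        }
    }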
  Big Data Platform Engineer (Big Data DevOps Engineer), Oct 2015 - May 2017
Supported a big data platform ingesting hundreds of terabytes of data consumed for business and operational analytics, and built big data solutions for various Verizon business units.
• Set up, managed, and upgraded the Hadoop ecosystem (Dev/QA/IT/Prod).
• Automated cluster monitoring and recurring tasks/operations.
• Created, provisioned, and managed virtual machines using Vagrant and VirtualBox.
• Troubleshot and isolated infrastructure issues followed by RCA, coordinating with the various teams for timely resolution.
• Worked as a Cassandra administrator: set up and managed multi-DC Cassandra clusters, upgraded Cassandra 1.x to 2.x, and tuned the Cassandra JVM (a minimal liveness-check sketch follows below).
• Used Jenkins CI and Chef for automated deployment and for user and configuration management.
• Created Subversion repositories and provided Git-related support to developers troubleshooting their issues.
• Maintained 24x7 production CDH Hadoop clusters running Spark, HBase, Hive, MapReduce, and Kafka, with over 500 nodes handling up to 100 petabytes of data.
• Set up and maintained Hadoop clusters on Microsoft Azure.
• Responsible for creating and granting access on Hadoop clusters.
• Developed automated shell scripts responsible for data flow between the clusters, monitoring, and status reporting.
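For Cassandra administration like that described above, routine health checks often reduce to querying the system tables. A minimal sketch with the open-source gocql driver; the contact point is a placeholder, and a real multi-DC setup would list several seeds and use a DC-aware policy:

    package main

    import (
        "fmt"
        "log"

        "github.com/gocql/gocql"
    )

    func main() {
        // Placeholder contact point for a single seed node.
        cluster := gocql.NewCluster("10.0.0.1")
        cluster.Keyspace = "system"

        session, err := cluster.CreateSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        // Read the node's release version as a trivial liveness probe.
        var release string
        if err := session.Query(`SELECT release_version FROM system.local`).Scan(&release); err != nil {
            log.Fatal(err)
        }
        fmt.Println("Cassandra release:", release)
    }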
- Guavus (United States; Software Development; 1 - 100 employees)
  Customer Support Engineer, Jan 2014 - Sep 2015
• Deployed the Guavus application at 50 different customer locations, onsite and remotely, especially Verizon data centers.
• Installed, configured, administered, and supported large and highly distributed Hadoop clusters (Apache distribution) for the data analytics platform in use at major telecom operators.
• Performed on-site and remote installation and configuration of Guavus solutions, plus administration and management of the solution via release upgrades and patches.
• Performed cluster maintenance, including adding and removing cluster nodes, cluster monitoring, and troubleshooting.
• Monitored Hadoop cluster availability, connectivity, and security; set up Linux users and groups.
• Performance-tuned Hadoop clusters and Hadoop MapReduce routines.
• Screened Hadoop cluster job performance; handled data modelling, database backup and recovery, and the management and review of Hadoop log files.
• Responsible for deployment, troubleshooting, and technical support of Guavus big-data analytics products/solutions, remotely or on customer sites, while working closely with the engineering teams.
• Responsible for maintaining reports of installation steps, any failures, and corrective actions.
• Understood the business needs of clients and delivered the service within the given timeline.
• Visited and met clients for each new release to do in-house UAT testing of the product.
• Conducted regular status meetings with the development team.
• Developed lasting relationships with client personnel that improved client ties.
• Communicated effectively with clients to identify needs and evaluate alternative technical solutions.
• Continually sought opportunities to increase customer satisfaction and deepen client relationships.
• Coordinated with internal engineering teams, other vendors' engineering, and clients' on-site and remote engineering resources, often in client-supervised, time-bound maintenance windows, to quickly resolve production issues.
  Engineer, Solution Delivery & Support, Jan 2012 - Jan 2014
• Installed, configured, administered, and supported large and highly distributed Hadoop clusters for the data analytics platform in use at major telecom operators.
• Performed on-site and remote installation and configuration of Guavus solutions, plus administration and management of the solution via release upgrades and patches.
• Coordinated with internal engineering teams, other vendors' engineering, and clients' on-site and remote engineering resources, often in client-supervised, time-bound maintenance windows, to quickly resolve production issues.
• Performed cluster maintenance, including adding and removing cluster nodes, cluster monitoring, and troubleshooting.
• Responded to Sev1 through Sev3 events, service and change requests, and issues in client Hadoop clusters: diagnosed the issue, applied the fix, and restarted the application or service; investigated events/issues using system and application logs; engaged development in real time if needed for a live fix; and continually worked with developers to get solutions to production issues.
• Debugged, remedied, and automated solutions for operational issues in the production environment.
• Created MOPs (methods of procedure) for deployments and change requests.
• Wrote custom monitoring scripts (Bash/KSH) to determine whether the application was operating within expected bounds and, if not, to fix it within SLA (a sketch of the same idea follows below).
• Conducted user acceptance testing with the client and vendor(s) over live meetings plus conference calls.
• Resolved technical issues after installation, working with Guavus and client engineering; worked with customer and Guavus technical resources to troubleshoot problems either on site or remotely, escalating as appropriate.
• Worked for Tier 1 operators in the US, including Verizon (multiple groups), Sprint, AT&T, and NTT America, successfully installing and supporting end-to-end solution deployments.
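The monitoring scripts mentioned above were written in Bash/KSH; as a sketch of the same "within expected bounds, else alert" idea in Go (the URL and latency bound are hypothetical, and the exit codes follow the common Nagios convention):

    package main

    import (
        "fmt"
        "net/http"
        "os"
        "time"
    )

    func main() {
        // Hypothetical application endpoint and latency bound.
        const url = "http://app.example.com/health"
        const maxLatency = 2 * time.Second

        client := &http.Client{Timeout: maxLatency}
        start := time.Now()
        resp, err := client.Get(url)
        elapsed := time.Since(start)

        // Unreachable or too slow: treat as critical.
        if err != nil {
            fmt.Fprintf(os.Stderr, "CRITICAL: %s unreachable after %v: %v\n", url, elapsed, err)
            os.Exit(2) // Nagios-style critical exit code
        }
        defer resp.Body.Close()

        // Reachable but outside expected bounds: also critical.
        if resp.StatusCode != http.StatusOK {
            fmt.Fprintf(os.Stderr, "CRITICAL: %s returned %d in %v\n", url, resp.StatusCode, elapsed)
            os.Exit(2)
        }

        fmt.Printf("OK: %s responded %d in %v\n", url, resp.StatusCode, elapsed)
    }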
Education
- Indian Institute of Management, Kozhikode
  Executive Post Graduate Certificate in Information Technology Management and Analytics
- Guru Gobind Singh Indraprastha University
  Master of Computer Applications (MCA), Computer Engineering