9am.works
Samarth Narula's profile

Availability not confirmed

Languages

English

Samarth Narula

Cloud Architect at Cyberender
Hourly rate (non-binding)
90/hour

Preferred skills to work with

AWS
GCP
Java
Python
Spring Boot
Docker
Kubernetes
Terraform
Git
Microservices

All Skills

Programming Language (2): Java, Python
Tool or Software (2): Spring Boot, Terraform
Containerization (2): Docker, Kubernetes
Version Control (1): Git

About

I am currently serving as a Cloud Architect at Cyberender. Previously, I worked as a Technical Architect/Tech Lead at one of the leading telecom companies through Pixeldust Technologies, where I architected a Cloud Data Ecosystem designed to serve 31,000 end users. My responsibilities included evaluating low-code SaaS tools and collaborating closely with senior Data Scientists, Data Architects, Data Analysts, and other key stakeholders on research and assessments. I also crafted in-house software solutions to address complex business challenges.

Prior to this, during my tenure at Atlan, I conceptualized and executed a cutting-edge Release Management System. I architected and implemented multi-cloud, multi-tenancy deployment models, aligning them with the company's customer coverage and innovation objectives. My role also included identifying opportunities for cloud cost optimization.

In my time at HERE Technologies, I developed event-driven ETL applications for processing GeoJSON data and set up cloud infrastructure and CI/CD pipelines for various map products. I was also actively involved in in-house cutting-edge technology projects, including parallel environments.

During my tenure at Oracle, I successfully transformed a legacy monolithic service into a Microservice-based Architecture and deployed it on AWS Cloud.

While at Accenture, I played a pivotal role in coding end-to-end features for financial market clients such as Goldman Sachs. Additionally, I contributed significantly to Accenture securing a substantial INR 700 crore MasterCard contract by building a dynamic UI framework. Before this, I worked as a Cloud Engineer for Infosprints, setting up and monitoring an ITS system on AWS.

Throughout my career, I've gained extensive experience in leading international teams for pioneering technology initiatives. I have a penchant for conducting POCs to foster innovation within my workplace. My contributions include architecting technical solutions and choosing the optimal tech stack for business requirements.

I am also a renowned author of top-rated and best-selling courses on Udemy and Coursera.


Explore my writings on Medium:

https://medium.com/@samarthnarula13


Visit my YouTube Channel:

https://www.youtube.com/@samarthnarula253


Discover my GitHub Projects:

https://github.com/sam253narula


My Personal Portfolio Website:

https://samarthnarula13.wixsite.com/profile


My Udemy Courses:

https://www.udemy.com/user/samarth-narula-3/


My Coursera Courses:

https://www.coursera.org/instructor/samarth-narula

Experiences

Cloud Architect
Cyberender
May 2024 - Present (1 year and 2 months)

Roles and Responsibilities:

  1. Building scalable cloud solutions and architectures.
  2. Collaborating with various teams on application development and deployment.
  3. Creating or optimising CI/CD deployment pipelines.
  4. Collaborating with the security team to ensure cloud security.
  5. Creating and monitoring infrastructure to ensure uptime.
  6. Translating client requirements into technical deliverables and creating JIRA epics and stories for other engineers and myself.
  7. Assisting with cloud migration strategy.
  8. Creating and optimising release management systems.
  9. Building DR/BC plans.
  10. Cloud QA.
  11. Leading the platform, application development, and data teams.
  12. Evaluating SaaS products for subscription: running POCs and MVPs, and writing feature evaluations, competitive analyses, standard operating procedures, and business justification documents.
  13. Understanding business problems, proposing web app solutions, building the UI/UX, getting approval to build, then assembling the team, building the product with the full-stack team, deploying it, demoing it to stakeholders, and providing feedback on team members.
  14. Building cloud, data, application, microservice, and low-code product architectures.

Tech Skills: AWS, GCP, Java, Python, Spring Boot and other Spring frameworks and libraries, scripting, CI/CD pipelines with GitLab and GitHub workflows, DevOps, Kubernetes, Docker, Helm, GitOps, ArgoCD, automation, Argo Workflows, low-code SaaS products, Generative AI, and cloud AI services.

Cloud Architect
Cyberender
May 2024 - Present (1 year and 2 months)

Work done:

  1. Onboarded a Python serverless application and assisted Java-based microservices team members in onboarding to an in-house cutting-edge testing framework.

    Tech stack: Jenkins, Java, Cucumber, Python, Behave, Spring integration tests, AWS (S3, Lambda, EMR, VPC, KMS), and a Tableau dashboard to monitor application test maturity status.

  2. Updated S3 bucket policies to fix access.

  3. Coded a Python boto3 utility, with unit tests using MagicMock, deployed as a scheduled AWS Lambda function that reads a file containing relative file paths and replicates those objects to another S3 location, handling all edge cases in code.
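A minimal sketch of such a utility, with the replication logic split from the Lambda handler so the manifest parsing stays unit-testable. The bucket names, manifest key, and function names here are placeholders, not the project's actual values:

```python
def parse_key_list(text):
    """Extract non-empty relative object keys, one per line."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def replicate(s3, manifest_bucket, manifest_key, src_bucket, dst_bucket):
    """Read the manifest of relative paths, then server-side copy each
    object from the source bucket to the destination bucket."""
    body = s3.get_object(Bucket=manifest_bucket, Key=manifest_key)["Body"].read()
    for key in parse_key_list(body.decode("utf-8")):
        s3.copy_object(Bucket=dst_bucket, Key=key,
                       CopySource={"Bucket": src_bucket, "Key": key})

def handler(event, context):
    """Scheduled-Lambda entry point (all names are illustrative)."""
    import boto3  # imported lazily so the helpers above stay testable offline
    replicate(boto3.client("s3"), "manifest-bucket", "manifest.txt",
              "source-bucket", "replica-bucket")
```

Using `copy_object` keeps the transfer server-side in S3, so the Lambda never streams the object bodies itself.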

  4. Coded a Python KMS library for a Glue job.

  5. Coded a PySpark Turing library to detokenize thousands to millions of SSNs, in batches of 1,000 at a time, utilising distributed processing with Spark in an AWS Glue job.
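The batching pattern described here can be sketched as follows. `detokenize_batch` stands in for the actual (proprietary) batch detokenization call, and in the real Glue job the partition function would be handed to Spark's `rdd.mapPartitions` so each executor processes its rows in chunks of 1,000:

```python
from itertools import islice

BATCH_SIZE = 1000

def batched(rows, size=BATCH_SIZE):
    """Yield lists of at most `size` items from any iterator."""
    it = iter(rows)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

def detokenize_partition(rows, detokenize_batch):
    """Meant to run inside rdd.mapPartitions(...): calls the (hypothetical)
    batch detokenization service once per 1,000 tokens instead of per row."""
    for chunk in batched(rows):
        yield from detokenize_batch(chunk)
```

Calling the service once per batch rather than once per row is what makes millions of tokens tractable.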

  6. GCP administration: creating SA keys, granting users access to GCP services, managing IAM, and managing GCP organisation policies.

  7. Wrote a Python script to recursively traverse all Bitbucket workspace repositories (including nested ones) to find and map each Bitbucket Terraform code repository URL to its GCP projects. Implemented batch processing and rate-limit backoff in the logic.
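The pagination-plus-backoff part of such a script might look like this sketch. The page shape follows Bitbucket's 2.0 API convention (`values` list plus a `next` URL), while `do_get`, the retry count, and the delay schedule are illustrative assumptions:

```python
import time

def paginate(fetch, url):
    """Walk a Bitbucket-style paginated API: each page is a dict like
    {"values": [...], "next": <url, absent on the last page>}.
    `fetch(url)` is caller-supplied (e.g. urllib with auth headers)."""
    while url:
        page = fetch(url)
        yield from page.get("values", [])
        url = page.get("next")

def get_with_backoff(do_get, url, retries=5, sleep=time.sleep):
    """Retry a GET that may hit the rate limit (HTTP 429), sleeping with
    exponential backoff between attempts. `do_get` returns (status, body)."""
    for attempt in range(retries):
        status, body = do_get(url)
        if status != 429:
            return body
        sleep(2 ** attempt)
    raise RuntimeError("rate limit: retries exhausted for " + url)
```

Injecting `sleep` and `do_get` keeps both helpers testable without touching the network.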

  8. Managing Azure Active Directory groups.

  9. Installed an XDR agent using a Kubernetes DaemonSet on GCP GKE to defeat a cyberattack that was in progress.

  10. Gathered requirements from IBM for IBM OMS infrastructure sizing, setup, and deployment on GCP.

Senior Java Python AWS Engineer
Capital One
May 2024 - Present (1 year and 2 months)
Technical Architect
Pixeldust Technologies
December 2022 - May 2024 (1 year and 6 months)
India Hook, SC, USA

Client: Deutsche Telekom

Role: Technical Platform Architect Team Lead | Project Manager

Initiative: Customer Data Platform

Building Below Architectures:

  1. GCP DevOps cloud architectures
  2. 10x data engineering enablement architectures
  3. Extendable tier-based Terraform architecture

Responsibilities:

  1. Managing a team of 20 engineers and collaborating with 150+ more engineers, business stakeholders, data scientists, and engineers from Google, Prophecy, and other leading tech companies.

  2. Driving all tech-related decisions end to end for the umbrella of projects under Cloud Enablement.

  3. Identifying and assigning all tasks to DevOps and senior platform engineers for migrating the various architectural components from the on-premise Cloudera-based Hadoop distribution ecosystem to GCP.

  4. Collaborating with senior management and software engineers to drive and ensure proper execution of technical tasks, and exploring and choosing technologies for the tech stack.

Work Done:

  1. Analysed and chose the best IaC tool in the industry, comparing Terraform vs Pulumi vs Crossplane.
  2. Analysed and chose a Git branching strategy and set source code repository structuring standards.
  3. Decided the Terraform profiling strategy, comparing workspaces vs branching vs Terragrunt vs directory structure.
  4. Evaluated Kubernetes multi-tenancy strategies: namespaces vs vclusters.
  5. Evaluated Prophecy.io and similar tools offering visual editors for creating Spark jobs and Airflow workflows, such as Fivetran, Matillion, Sparkflow, and Talend.
  6. Combined visual editor tools like Prophecy for 10x-speed Spark job creation with a managed Apache Airflow tool like Astronomer.io for easy DAG creation and good support for the underlying infrastructure and BI tools, for 10x engineering.
  7. Implemented cybersecurity measures by adding code-scanning stages and image vulnerability scanning, auditing and tightening cloud firewall rules, and enforcing GCP organisation policies.
Technical Architect
Pixeldust Technologies
December 2022 - May 2024 (1 year and 6 months)
India Hook, SC, USA

Client: Deutsche Telekom

Role: Individual Contributor and Team Lead

Initiative: Ingest Web App

Description: Created RESTful API-based microservices used to ingest data from on-prem systems into GCP GCS buckets.


Initiative: GCP Dataplex Collibra Connector Library

Description: Built a Java library that fetches data from GCP Data Catalog/Dataplex given various parameters.


Initiative: Prophecy POC

Description:

  1. Installed the Prophecy tool on GCP GKE and set up a Dataproc cluster and bastion host, leading a team of 2 cloud engineers and collaborating with 3 Prophecy engineers.

  2. Created various types of ETL pipelines using the Prophecy visual editor.

  3. Evaluated Prophecy's compatibility with ML workflows, collaborating with various data scientists to learn from their experience.

  4. Created these documents: competitive analysis, business justification, technical setup, and feature evaluation sheet.

  5. Presented the Prophecy tool to a wide audience of data ecosystem personas: senior data scientists, data analysts, data engineers, CDP architects, cloud architects, Telekom architects, and various regional heads.

  6. Also collaborated with the above personas to gain more insights for feature evaluation and validation of our business use case.

  7. As a bonus, got advance previews of generative AI features and how they make low-code tools even easier to use.

Technical Architect
Pixeldust Technologies
December 2022 - May 2024 (1 year and 6 months)
India Hook, SC, USA

Client: Deutsche Telekom

Role: I wear multiple hats (Cloud Architect / Product Architect / Full Stack Cloud DevOps AI Engineer / Team Lead / Product Owner / Technical Evangelist)

Initiative: Table Classification Product

Business Accelerator: We needed to ingest 32k tables from on-prem to the cloud, but only TKG-compliant data could be ingested.

Solution: I proposed a web app with which users can set the compliance status of tables, define associations between tables, and delegate table ownership to other users.

Tech stack:

Machine Learning: openai

Front end: JavaScript, React, Material UI

Backend: Java, Spring Boot

Database: Neo4J

Deployment: GCP (GKE, GCS, Cloud Build, jumphost VM, Cloud DNS), GitLab CI/CD, Docker, Kubernetes, Helm, Skaffold, and a Python script for loading database data after deployment.

Design: Figma

Built multiple forms (login, register, compliance, association, delegation, profile, all tables) and proposed a fancy OpenAI feature for data search using a GPT model (LLM).

Built GitLab pipelines and GCP infrastructure for product deployment, and in the next iteration built the analytics page, notification flow, SMTP integration, request flow, and the OpenAI feature.

Optimised the GitLab pipeline using cloud-native deployment, and also synced with the business to extend the product's scope to enable migration of the company's other data hubs.

Delivered the product in 5 months and made it available to end users via a domain name. I now work as a technical evangelist to help drive adoption of the product within the company and provide end-user support.

For this project, I chose a team of 5 freshers, 1 UI/UX designer, 1 senior frontend engineer, and 1 cloud engineer, and successfully turned them all into multi-skilled engineers within 3 months; now, like me, they can wear multiple hats and deliver any challenging technical task with confidence.

I also received a work appreciation gift from the client for this work on 7th March 2024.

Technical Architect
Pixeldust Technologies
December 2022 - May 2024 (1 year and 6 months)
India Hook, SC, USA

Client: Deutsche Telekom

Project: Landing Zone Index Page

Role: Technical Architect Tech Lead

Description: An index page with links to everything that the 40k end users of the new GCP cloud hub-and-spoke platform need to onboard themselves and start their journey in the new world.

Tech Stack: Java, Spring Boot, React, Material UI, GCP, Kubernetes, Docker, Helm, GitLab CI/CD, Terraform.


This project was proposed by the client; I architected and coded it and led my team to build iteration 1 in 10 days.


Iteration 2: The client proposed a new design idea. I worked with the UI/UX designer and the client-side graphics designer to create wireframes, then implemented them with my client-side engineers.

Completed within 7 days.


Iteration 3:

1) Created a feature to show all Caiman roles each user has in an Index page component.

2) Worked with the AI team and my team to create a semantic search AI feature.

Technical Architect
Pixeldust Technologies
December 2022 - May 2024 (1 year and 6 months)
India Hook, SC, USA

Client: Deutsche Telekom

Role: Data Architect Team Lead

Initiative: Setup Licensed Prophecy product for client

Description:

  1. Led a team of 10 and collaborated with more than 50 people across the client, Google, and Prophecy to successfully install the product on GCP.
  2. Sized GKE for the Prophecy installation, set up Dataproc clusters with Livy installed and Cloud Composer clusters, and configured these clusters to connect to Prophecy.
  3. Configured Prophecy for the client's different application teams.
  4. Architected GitLab CI/CD stages to mirror the Helm charts, images, and libraries that Prophecy needs, and had them implemented correctly by the assigned platform engineer.
  5. Collaborated with the security team to enable SSO (Single Sign-On) and to fit Prophecy's login mechanism to the company's Azure Active Directory setup.
  6. Helped set up the Prophecy QA team and created their testing roadmap.
  7. Nominated the required set of employees for Prophecy platform and data engineering trainings.
  8. Created and documented the Prophecy CI/CD, IAM, and Fabric cloud architectures.
  9. Gave demos to a large audience in breakfast sessions to announce readiness, the features most useful for us, and the process for requesting access to Prophecy within the company.
  10. Created different types of PySpark ETLs using Prophecy, such as reading data from GCS or BigQuery, transforming it with Prophecy gems, and storing the transformed data via target gems into GCS buckets or as BigQuery tables.
  11. Tested reading, transforming, and storing different data formats with Prophecy: Parquet, CSV, Avro, Delta, ORC, JSON, etc.
  12. Requested new features from the Prophecy team, such as support for BigLake tables, Dataproc Serverless, and logged-in identity-based authorisation.
  13. Identified issues with the deployed product and collaborated with the Prophecy team to resolve them.
Principal Software Engineer
Atlan
April 2022 - November 2022 (8 months)

Startup Company

Job Type: Contract

About the company: Atlan is a collaborative workspace for data teams.

Project : Silo Multi-Tenant Setup on Customer AWS Account Automation

Role: Principal Cloud Engineer

Tech Stack: AWS services such as CloudFormation, SNS, S3, EKS, EC2, Resource Groups, Lambda, SSM Documents, Parameter Store, Secrets Manager, RDS, CloudFront, and VPC; plus Kubernetes, Docker, Helm charts, Argo CD and Argo Workflows, GitHub Actions, Java, Loft vcluster, polyglot programming, and scripting.


Work done: Architected and implemented the automation. Customers run a CloudFormation template, which spins up all the infrastructure the product requires. During template execution, a Kubernetes job generates a kubeconfig file and a values.yaml file (with the customer's public info) and places them in an S3 bucket. The template also creates a Lambda function with an S3 create-event trigger, which generates pre-signed URLs for the kubeconfig and values.yaml files and sends them to our public SNS topic. In our account, I wrote another Lambda that processes these pre-signed URLs, stores the received file data in Secrets Manager and an RDS Postgres Aurora database, and then triggers a multi-step Argo Workflow:

Step 1: Add the customer cluster to our Loft vcluster instance.

Step 2: Create a vcluster in EKS.

Step 3: Update the CloudFront origin in the customer account through a Kubernetes job, using a Kubernetes service account that has assumed a customer-account AWS role with the required permissions.

Step 4: Create parameters in the customer's Parameter Store by copying them over from ours.

Step 5: Create the literal file for registering the app on our ArgoCD.

Step 6: Deploy Atlan under the vcluster: create the app in ArgoCD and let it sync the Helm chart changes and deploy the Atlan product.

Completed all of the above in 74 days.
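The receiving side of this flow could be sketched as below; the SNS message field names (`kubeconfig_url`, `values_url`) are illustrative assumptions, since only the overall flow is described above:

```python
import json

def extract_presigned_urls(event):
    """Pull the pre-signed kubeconfig / values.yaml URLs out of the SNS
    event delivered to the receiving Lambda. The message field names
    are illustrative, not the project's actual schema."""
    urls = []
    for record in event.get("Records", []):
        message = json.loads(record["Sns"]["Message"])
        urls.append((message["kubeconfig_url"], message["values_url"]))
    return urls

# The real handler would then download the files, store their contents in
# Secrets Manager / the RDS Aurora database, and submit the multi-step
# Argo Workflow described in Steps 1-6 above.
```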

Principal Software Engineer
Atlan
April 2022 - November 2022 (8 months)

Project: AWS Cost Optimisation

Role : Solution Architect

  1. Identified the key services causing high cost using AWS QuickSight reports.
  2. Connected with a couple of senior startup solutions architects from AWS to get more ideas, and attended AWS cost optimisation seminars to learn more techniques.
  3. Promptly brought costs down by $800/day by disabling dev-environment EKS CloudWatch logs.
  4. Recommended the techniques below to further bring down overall cost:
  • Update existing EKS nodes to use AMD-series processors across all environments.
  • Move all on-demand instances to spot for the dev environment, and to one on-demand plus the rest spot for production, via the Terraform scripts: Terraform was used to spin up all the infrastructure (with state stored in an S3 bucket), so making the change directly from the AWS console would cause Terraform state-file drift.
  • To re-enable logging in the dev environment, reduce the logging period and configure lifecycle rules to move data to Glacier storage.
  • For Lambda, use ARM-based architecture: migrate all existing x86 architectures to ARM.
  5. Assigned these tasks to the other engineers in the team and ensured proper execution.
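The Glacier recommendation above could be expressed as an S3 lifecycle configuration along these lines; the prefix and day counts are illustrative, not the values used on the project:

```python
def glacier_lifecycle(prefix="logs/", transition_days=30, expire_days=365):
    """Build an S3 lifecycle configuration that transitions objects under
    `prefix` to Glacier after `transition_days` and expires them after
    `expire_days` (all values are placeholders)."""
    return {
        "Rules": [{
            "ID": "archive-dev-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": prefix},
            "Transitions": [{"Days": transition_days,
                             "StorageClass": "GLACIER"}],
            "Expiration": {"Days": expire_days},
        }]
    }

# applied with boto3, e.g.:
# s3.put_bucket_lifecycle_configuration(
#     Bucket="dev-logs-bucket", LifecycleConfiguration=glacier_lifecycle())
```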

RND projects:

  1. POC: Send a cross-account AWS SNS topic notification from a Lambda function.
  2. POC: Test whether we can send a cross-account SNS notification from AWS CloudFormation using the AWS CLI. It is possible via the CLI but not through the AWS Console.

    Outcome: Raised an AWS feature request on the AWS CloudFormation roadmap GitHub repo to enable sending cross-account SNS topic notifications from the AWS Console.

    All of the above was completed in 15 days.
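For the cross-account SNS POC, the key piece is the topic's resource policy in the receiving account; a sketch, where the account id and ARNs are placeholders:

```python
import json

def allow_cross_account_publish(topic_arn, publisher_account_id):
    """SNS topic resource policy letting a Lambda (or CLI caller) in
    another account publish to this topic. The publisher side then
    simply calls sns.publish(TopicArn=topic_arn, Message=...)."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowCrossAccountPublish",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{publisher_account_id}:root"},
            "Action": "sns:Publish",
            "Resource": topic_arn,
        }],
    })
```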
Principal Software Engineer
Atlan
April 2022 - November 2022 (8 months)

Project: Create Release Management RoadMap for the Company

Role: Project Owner & Enterprise Architect | One Man Army Role.

Work Done:

  1. Identified problems with the current release processes.
  2. Explored different solutions on the market usable as Docker image and Helm chart package repositories.
  3. Listed pros, cons, and cost for each registry solution.
  4. Proposed solutions for the Helm chart repository and Docker repository, plus plugins and architectural changes to enable proper semantic versioning of artefacts in an automated fashion, using GitHub Actions, a server-side Git hook, and a client-side hook via Husky (an NPM package) and the git-cz plugin.
  5. Also recommended future enhancements: since we use GitHub, automate release documentation with GitHub Pages and enable GitHub Community for long-term, growing collaboration among engineers.
  6. While creating this roadmap, compared two cost-effective data transfer solutions.
  7. Then led a team of 2 engineers in implementing the proposed architecture for the initial tasks, then completed everything alone.
  8. Created various estimated cost reports using the AWS Calculator.
  9. Made and executed critical decisions around private ECR (no caching support), VPC NAT vs VPC PrivateLink connectivity with EKS for image pulls, and cross-region and cross-account replication strategy.
  10. Wrote GitHub Actions workflows and scripts for CI/CD: push images to ECR, pull the latest image from ECR, create the ECR repo if it doesn't exist, create a lifecycle policy if it doesn't exist, keep the last 10 GitHub tags, delete all tags on feature-branch deletion, and semantic versioning.
  11. Designed strategies with the principle of least privilege around image pulls in EKS from ECR and image pushes from GitHub workflows using IAM OIDC roles.
  12. Designed and architected strategies for rollback mechanisms; deployment strategies such as blue-green, canary, and rolling updates; caching; replication; and automated image updates with ArgoCD Image Updater.
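The ECR lifecycle-policy step described above can be sketched as a "keep only the newest N images" rule; the repository name and rule description are placeholders:

```python
import json

def keep_last_n_policy(n=10):
    """ECR lifecycle policy text that expires every image except the
    newest `n`, in the standard ECR lifecycle-policy JSON shape."""
    return json.dumps({
        "rules": [{
            "rulePriority": 1,
            "description": f"keep only the newest {n} images",
            "selection": {
                "tagStatus": "any",
                "countType": "imageCountMoreThan",
                "countNumber": n,
            },
            "action": {"type": "expire"},
        }]
    })

# applied with boto3, e.g.:
# ecr.put_lifecycle_policy(repositoryName="my-service",
#                          lifecyclePolicyText=keep_last_n_policy())
```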
Principal Software Engineer
Atlan
April 2022 - November 2022 (8 months)

Project: Create Loft VCluster Course

Role: Content Creator and Course Instructor

Work Done:

  1. Created the curriculum and content for the course.
  2. Created an enterprise-level Spring Boot application.
  3. Created Kubernetes deployment and load-balancer-type service YAML files for the Spring Boot application.
  4. Created an AWS EKS cluster and deployed the Spring Boot application into it.
  5. Created a Helm chart for the Spring Boot application.
  6. Deployed Loft vcluster into AWS EKS, created a vcluster, and deployed the Spring Boot application into it.

Other Architectural Discussions:

  1. Interactions with Google Employees and our vendor partner for POC on Google Anthos.
  2. Interactions with junior engineers working on different cutting edge new technology initiatives.
  3. Interactions with Amazon software engineers and solutions architects for solving specific architectural problems.
Principal Software Engineer
Atlan
April 2022 - November 2022 (8 months)

General Roles & Responsibilities:

  1. Planning engineering strategies for the company
  2. Implementing process improvements
  3. Managing engineering departments in tasks like research and design
  4. Providing expert advice to other engineers
  5. Determining department goals and creating implementation plans
  6. Creating and managing engineering budgets around estimated AWS service usage, GitHub plans, and other tooling
  7. Architecting, planning, managing, and also executing when junior engineers are not available

Exposure and walkthrough of new technology initiatives around SRE, Security and Automation.

Exposure to Kubecost, Teleport, Datadog, Rootly, etc.

Technology Advisor
Qwikskills
September 2021 - February 2023 (1 year and 6 months)
India Hook, SC, USA

We were sponsored by the Government of India to uplift students in rural areas. I helped the company scale by guiding its engineers on technical architecture and its CEO on next steps for scaling and expanding the business model.

Publisher
Tutorialspoint
August 2020 - December 2022 (2 years and 5 months)

I have published two courses on TutorialsPoint

Link Below:

https://www.tutorialspoint.com/videotutorials/profile/samarth_narula

Software Engineer II
HERE Technologies
August 2020 - March 2022 (1 year and 8 months)
Mumbai, Maharashtra, India

Description: A maps company with many different products that provide map data in various formats and visualizations.


Product: HDMaps

Description: Created Java AWS utilities and Scala AWS utilities.


Product: Wall-E Lanes

Description: Created two ETL projects and pipelines that read Protobuf data from a proprietary data source, run an algorithm to process this data into GeoJSON, and finally publish it to HERE Technologies data catalogs.

Technologies: Java, Protobuf (to reduce data size), Scala (to reduce boilerplate code), Maven, Docker, Docker Compose (to sequence operations during deployment), Spark (for parallelism), AWS EC2 (for deployment), EC2 Image Builder (for custom EC2 images), Jenkins, HERE OLP pipeline, AWS RDS, Splunk (for logging), and Akka (for asynchronous communication).


Application: ETL Project

Description: Created a complex algorithmic ETL project that reads one MOM-format (GeoJSON) data feature and creates 3 separate MOM feature objects out of it; for testing, used the geojson.tools open-source utility to verify the MOM (Map Object Model) formatted input and output objects.


Purpose: Seeding Wall-E with lanes data from different data sources; Wall-E will enable the highest level of autonomous driving.


Application : LEAS

Description: Created a new environment in the existing complex pipeline and implemented PACT (Project Agreement Consumer Test) consumer-side tests for asynchronous communication with the provider microservice. Then set up the GitLab pipeline for the new project I created, and added a new stage (pact_stage) to the existing LEAS pipeline using multi-project pipelines (triggering one project's pipeline from another's).

Then led the PACT consumer-side testing effort by identifying and creating JIRAs, assigning them to my team, and reviewing and merging their MRs.

Senior Software Engineer
HERE Technologies
August 2020 - March 2022 (1 year and 8 months)
India Hook, SC, USA

LDPS(Lane Derivation Preparation Service) Project :

Description: This project is part of the Lanes subsystem, which follows a pipe-and-filter, event-driven, asynchronous architecture pattern. The core responsibility of this project is to improve the geometry of lanes before providing it to its downstream service, LD (Lane Derivation).

Work done:

  1. Created shell scripts for deployment and un-deployment of Kubernetes pods.
  2. Provided DevOps support for this project, involving manual deployment and un-deployment of Flink pods: the Task Manager pod, Job Manager pod, orchestration pod, Data Hub Connector pod (Data Hub is a RESTful API application for CRUD operations on the DB), the streaming application pod (which constantly polls the DB and copies data to an OLP catalog topic, i.e. a Kafka topic), AI/ML application pods, and the AWS SQS pod (which reads and writes data to AWS SQS queues).
  3. Did cleanup of DB tables and AWS resources for this project.
  4. Created new environments and their required components: Kubernetes namespaces, databases (created and configured AWS RDS Aurora PostgreSQL), a Splunk index, and log forwarding.
  5. Then created a GitLab pipeline from scratch to automate deployment, un-deployment, and cleanup of all LDPS components for all environments.
  6. Then automated the daily SIT (System Integration Test) environment cleanup by creating a GitLab scheduler.
  7. Also provided DevOps support for other services in the Lanes domain (part of the Wall-E Lanes subsystem), such as monitoring Kubernetes pod health with Grafana dashboards and checking Splunk logs whenever needed.
  8. Configured AWS VPC peering between multiple AWS accounts within the organisation, and solved VPC peering issues by updating routing tables and confirming correct routing.
  9. Used and configured Kubernetes components extensively in this project.

Senior Software Engineer
HERE Technologies
August 2020 - March 2022 (1 year and 8 months)

Project : Onboard Lane Observables to Parallel Environment Framework.

Description: The Parallel Environment Framework is our in-house, cutting-edge framework that enables the creation of parallel environments and the deployment of all services for all applications across the organisation. It also creates and automatically allocates the resources an application requires. Say your application needs an AWS RDS Postgres Aurora database, an SQS queue, or an S3 bucket: these resources are automatically created and managed by the framework once you make your services parallel-environment compliant and onboard them.


Technology Stack: Docker, Kubernetes, Helm, Jinja templates, GitLab CI/CD, shell scripts, OLP, AWS, databases, Kafka, Java, Spring Boot, Flink, Postgres DB


Work done: I had to restructure all the lane observable projects, then package and upload them to the Helm repo, first manually and then via a GitLab package stage. Restructuring involved updating Helm values.yaml files, folder names, and scripts, and upgrading versions of libraries and other applications that our Java Spring Boot Flink application connects with. I also had to modify the Helm template to make it PE-compliant. Then I tested the onboarded service by setting up a parallel environment locally: mounting the PE project directory into a running Docker image and writing the PE project's DSL to download the uploaded package from the Helm repo, template it, and deploy it into the PE Kubernetes namespace.


Project: Migration of AWS resources from the old AWS account to a new one.

Description: Had to create security groups and RDS instances, and set up VPC peering with the AWS account holding our EKS cluster. Made changes in the applications to connect to the new databases, and re-deployed them with the new databases into the new Kubernetes cluster with new namespaces.

Created IAM users with minimum privileges for teammates, as they were not granted admin access.

Technical Trainer
MicroStream
April 2020 - December 2023 (3 years and 9 months)

Provided Spring Boot, AWS and Microstream trainings to students worldwide.

Check out one of the training videos on my YouTube channel:

https://youtu.be/D9lIIIVtqyY?si=EWxuX1XZ8-pAFvXX

Publisher
Udemy
January 2020 - May 2023 (3 years and 5 months)

I have published two top-rated, best-selling courses on microservices and design patterns in Java on Udemy, the most popular MOOC platform; my courses have more than 60K enrolments.


https://www.udemy.com/user/samarth-narula-3/

Guided Project Publisher
Coursera
January 2020 - December 2022 (3 years)

I have published 4 guided projects on Coursera, built using the Rhyme platform, on microservices with the Spring Boot framework and its integrations with other frameworks; my guided projects have more than 3,000 enrolments.

https://www.coursera.org/instructor/samarth-narula

Staff Consultant
ORACLE FINANCIAL SERVICES SOFTWARE LIMITED
December 2019 - August 2020 (9 months)

Project Name: Open Trading

Client: State Street Bank

Description: As a Senior Java Developer for State Street Bank, my role was to understand what data traders and portfolio managers required to make business decisions, pull that data from different screens of the Bloomberg Terminal using the Bloomberg APIs and the respective mnemonics, and then write algorithms to process and load this data into an Oracle DB and flat files. There was no JUnit test coverage for their applications, so I wrote JUnit and Mockito tests for all of them and fixed many existing data issues. The Sonar quality gate was also failing, so I fixed the Sonar issues, made all their applications pass the quality report, and fixed broken Jenkins pipelines.


Innovation: Migrated and decoupled all the existing legacy monolithic applications into microservices, and replaced JDBC code with JPA and Hibernate to resolve the open DB connection issues. Implemented design patterns where they fit best, got rid of boilerplate code, made the applications more resilient using the Lombok framework, and created API documentation.


Free time: POCs in the resource pool

Description: During the first month after joining, while in the resource pool, I made many POCs on Spring Boot Kafka, Eureka, Docker, AWS EC2, Lambda, Elastic Beanstalk, CodePipeline, S3, DynamoDB, JMeter, and microservice design patterns such as API Gateway, Saga, CQRS, and event-driven design. I also researched architecture solutions provided by Microsoft, then researched chaos testing tools and methodologies needed for business continuity. I also built a coronavirus tracker and deployed it on PCF Cloud to help people around the globe.

Course Publisher
Great Learning
January 2019 - December 2021 (3 years)

Created technical courses on Java, Spring Boot, and AWS, and delivered a full-stack technical training program for a financial-markets banking client.

Associate Software Developer at Goldman Sachs
Accenture
November 2017 - December 2019 (2 years and 2 months)
Mumbai, Maharashtra, India

Project Name : - Trader Mandate

Description: The Mandate application gives various business users the capability to view, review, edit, and approve mandates. Mandates are documents that prescribe restrictions on a desk's trading activity by outlining such things as who can trade, what they can trade, and for which entities they can book trades.

The application's workflow and reporting (mandate PDFs) also serve to document and provide controls, and evidence of controls, ensuring the firm acts in accordance with regulatory requirements (the Volcker Rule, announced under President Obama in 2010).


Role & Responsibilities: Development, enhancement, maintenance, and deployment.

Software Developer at MasterCard
Accenture
November 2017 - December 2019 (2 years and 2 months)
Mumbai, Maharashtra, India

Best Achievement: Played a critical role in Accenture winning the MasterCard contract by creating a technical POC; once Accenture won the contract, I worked exclusively on the MasterCard project below.

Project:- Customer Parameter Enablement

Description: Automated the business onboarding of banks, processors, and other financial entities onto the MasterCard network by developing a web platform that lets banks and other firms onboard faster through self-service review and completion of all required documents (such as the Durbin agreement and EMV documentation), so that new accounts and financial institutions can be onboarded and begin performing transactions.


Roles & Responsibilities: Everything from scratch, from development to configuring the cloud for deploying new microservices. Built many Spring Boot integration POCs that were eventually implemented in the project, covering MongoDB, SQL, H2, Camunda, AngularJS, caching with Redis, Mongo Atlas Cloud configuration, Lombok, JPA, Hibernate, Spring Security, Spring Cloud, test-driven development, PCF Cloud, AWS Cloud, a multi-cloud module, and even Google OAuth2-based authentication.

Cloud Engineer
INFOSPRINTS SOLUTIONS PRIVATE LIMITED
August 2014 - November 2017 (3 years and 4 months)

Description:

  1. Implement and manage AWS cloud infrastructure for an ITS (Intelligent Transportation System), ensuring high availability and performance.
  2. Automate deployment processes using Infrastructure as Code (IaC) tools like AWS CloudFormation and Terraform.
  3. Write Lambda functions with Java and Python.
  4. Integrate various AWS services (e.g., EC2, S3, Lambda, RDS, IoT) to build comprehensive ITS solutions.
  5. Collaborate with software development teams to deploy applications and services in the cloud environment.
  6. Monitor and optimize cloud infrastructure for cost, performance, and security.
  7. Implement and manage monitoring tools (e.g., AWS CloudWatch) to ensure system reliability and efficiency.
  8. Ensure compliance with industry standards and regulations related to transportation systems.
  9. Implement security best practices to protect transportation data and cloud infrastructure.
  10. Work closely with cross-functional teams, including data scientists, developers, and transportation experts, to deliver robust ITS solutions.
  11. Provide technical support and guidance to team members and stakeholders on cloud-related issues.
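The Lambda work in point 3 reduces to a handler with a fixed `(event, context)` signature. The sketch below is a minimal, self-contained Python example; the event shape (a `device_id` and `speed_kmh` reading from an ITS sensor) is hypothetical, invented for illustration.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler: validate an ITS sensor reading and return a verdict.

    The event fields (`device_id`, `speed_kmh`) are illustrative, not from a real feed.
    """
    device_id = event.get("device_id")
    speed = event.get("speed_kmh")
    if device_id is None or speed is None:
        return {"statusCode": 400, "body": json.dumps({"error": "missing fields"})}
    verdict = "over_limit" if speed > 80 else "ok"
    return {
        "statusCode": 200,
        "body": json.dumps({"device_id": device_id, "verdict": verdict}),
    }

# Local invocation for testing (context is unused here, so None is fine):
resp = lambda_handler({"device_id": "cam-17", "speed_kmh": 92}, None)
print(resp["statusCode"])  # 200
```

In a deployed function the return shape above matches what API Gateway proxy integration expects; for other triggers (S3, IoT) the event and return shapes differ.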

Education

Mastering Microservices with Java, LinkedIn (June 2019)
Git Essential Training: The Basics, LinkedIn (June 2019)
First Look: Java 10 and Java 11, LinkedIn (June 2019)
Accenture Green Field Industrial 3-Month Training Program, Accenture (February 2018)
Java Tutorial Course, SoloLearn (January 2018)
HTML Fundamentals Course, SoloLearn (October 2016)
SQL Fundamentals Course, SoloLearn (October 2016)
Learn Responsive Web Development from Scratch, Udemy (April 2016)
Bachelor's degree in Information Technology, University of Mumbai

Skills: Advanced Java · Amazon Web Services (AWS) · Google Cloud Platform (GCP) · MongoDB · API Development · Cloud Computing · Cloud-Native Applications · Cloud-Native Architecture · Data Engineering · Data Lakes · Data Migration · Data Modeling · Design Patterns · DevOps · Docker · Domain Architecture · Full-Stack Development · Java Development · Team Leadership · Team Building · Stakeholder Management · Spring Cloud