
RESUME – Senior Data Engineer

                                                        Bhagath Gadamsetty

Email: [email protected]

PH: 682-999-8268

Senior Big Data Engineer

PROFESSIONAL SUMMARY

  • Over 8 years of experience in Data Engineering, Data Pipeline Design, Development and Implementation as a Sr. Data Engineer/Data Developer and Data Modeler.
  • Worked with Cloudera: migrated data from on-prem sources to Google Cloud Storage (GCS) using Sqoop in a Cloudera environment.
  • Worked with both CDH and CDP environments in Cloudera Manager.
  • Sqoop: Worked with Sqoop to ingest and retrieve data between RDBMS sources like MySQL and Google Cloud Storage on GCP Dataproc clusters.
  • Scripting: Wrote production-level code for querying and retrieving data from RDBMS sources like MySQL and cloud resources such as GCS and S3 buckets.
  • Data Pipelines: Designed and developed complex data pipelines, maintaining data quality, to stream batch and streaming data from Google Cloud Pub/Sub to Google BigQuery (a minimal sketch appears at the end of this summary).
  • Expert in building Enterprise Data Warehouses and data warehouse appliances from scratch using both Kimball's and Inmon's approaches.
  • Experience in working with Excel Pivot and VBA macros for various business scenarios.
  • Experience in Big Data ecosystem technologies like Hadoop, HDFS, MapReduce, Apache Pig, Spark, Hive, Sqoop, HBase, Terraform, Flume, and Oozie.
  • Good experience in installing, configuring, and administering Hadoop clusters on the major Hadoop distributions Hortonworks and Cloudera.
  • Experience in batch processing and writing programs using Apache Spark for handling real-time analytics and real-time streaming of data.
  • Good understanding of ZooKeeper and Kafka for monitoring and managing Hadoop jobs, and used Cloudera CDH4 and CDH5 for monitoring and managing Hadoop clusters.
  • Experience in analysis, design, development, and Big Data with Scala, Spark, Hadoop, Pig, and HDFS.
  • Strong experience in Software Development Life Cycle (SDLC) including Requirements Analysis, Design Specification and Testing as per Cycle in both Waterfall and Agile methodologies.
  • Strong experience in writing scripts using Python API, PySpark API and Spark API for analyzing data.
  • Extensively used Python libraries including PySpark, Pytest, PyMongo, cx_Oracle, PyExcel, Power BI, Boto3, Psycopg, embedPy, NumPy, and Beautiful Soup.
  • Snowflake SQL: wrote SQL queries against Snowflake and developed Unix and Python scripts to extract, load, and transform data.
  • Hands-on use of Spark and Scala APIs to compare the performance of Spark with Hive and SQL, and Spark SQL to manipulate Data Frames in Scala.
  • Expertise in Python and Scala; wrote user-defined functions (UDFs) for Hive and Pig using Python.
  • Experience in developing MapReduce programs using Apache Hadoop for analyzing big data as per requirements.
  • Hands-on with Spark MLlib utilities such as classification, regression, clustering, collaborative filtering, and dimensionality reduction.
  • Experience in working with Flume and NiFi for loading log files into Hadoop.
  • Experience in working with NoSQL databases like HBase and Cassandra.
  • Experienced in creating shell scripts to push data loads from various sources from the edge nodes onto the HDFS.
  • Developed Talend code for S3 tagging in the process of moving data from source systems to AWS S3.
  • Involved in developing Talend jobs to copy data from S3 to Redshift.
  • Integrated a Redshift SSO cluster with Talend.
  • Troubleshot and maintained ETL/ELT jobs running on Matillion.
  • Good working knowledge of the Amazon Web Services (AWS) Cloud Platform, including services like EC2, S3, VPC, ELB, IAM, DynamoDB, CloudFront, CloudWatch, Route 53, Elastic Beanstalk, Auto Scaling, Security Groups, EC2 Container Service (ECS), CodeCommit, CodePipeline, CodeBuild, CodeDeploy, Athena, Redshift, CloudFormation, CloudTrail, OpsWorks, Kinesis, SQS, SNS, and SES.
  • Experience in Data Analysis, Data Profiling, Data Integration, Migration, Data governance and Metadata Management, Master Data Management and Configuration Management.
  • Experienced in creating User/Group Accounts, Federated users and access management to User/Group Accounts using AWS IAM service.
  • Experience with developing and maintaining Applications written for Amazon Simple Storage, AWS Elastic Map Reduce, and AWS Cloud Watch.
  • Extensively worked with Teradata utilities FastExport and MultiLoad to export and load data to/from different source systems, including flat files.
  • Experienced in building automated regression scripts for validation of ETL processes between multiple databases like Oracle, SQL Server, Hive, and MongoDB using Python.
  • Proficiency in SQL across several dialects, including MySQL, PostgreSQL, Redshift, SQL Server, and Oracle.
  • Developed SQL queries with SnowSQL and Snowpipe, and big data modeling techniques using Python.
  • Experience in designing star schema, Snowflake schema for Data Warehouse, ODS architecture.
  • Skilled in System Analysis, E-R/Dimensional Data Modeling, Database Design and implementing RDBMS specific features.
  • Good knowledge of Data Marts, OLAP, Dimensional Data Modeling with Ralph Kimball Methodology (Star Schema Modeling, Snow-Flake Modeling for FACT and Dimensions Tables) using Analysis Services.
  • Worked extensively on Azure Active Directory and on-premises Active Directory.
  • Worked extensively on Azure Function Apps and WebJobs.
  • Worked extensively on Azure Cosmos DB to connect with different protocols.
  • Experienced in creating and provisioning different Databricks clusters needed for batch and continuous streaming data processing and installed the required libraries for the clusters.
  • Experienced in working with Azure Data Lake and Azure Blob Storage and performed analytics in Azure Synapse Analytics.
  • 5+ years of experience in Azure Cloud, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Azure Analytical Services, Azure Cosmos NoSQL DB, Azure HDInsight big data technologies (Hadoop and Apache Spark), and Databricks.
  • Expertise in AWS Lambda functions and API Gateway, submitting data via API Gateway that is accessible via Lambda functions; managed configuration of web apps and deployment to AWS cloud servers through Chef.
  • Experience in implementing medium to large scale BI solutions on Azure using Azure Data Platform services (Azure Data Lake, Data Factory, Data Lake Analytics, Stream Analytics, Azure SQL DW, HDInsight/Databricks, and NoSQL DB).
  • Implemented a nine-node CDH3 Hadoop cluster on Red Hat Linux.
  • Actively participated in helping team members resolve technical issues, troubleshooting, and project risk and issue identification and management.
  • Strong analytical and problem-solving skills and the ability to follow through with projects from inception to completion.
  • Ability to work effectively in cross-functional team environments, excellent communication, and interpersonal skills.
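The Pub/Sub-to-BigQuery pipeline mentioned in the summary above can be illustrated with a minimal Python sketch. This is an illustrative outline only: the project, subscription, and table names are hypothetical placeholders, and it simply assumes the google-cloud-pubsub and google-cloud-bigquery client libraries rather than reproducing the production pipeline.

```python
# Minimal sketch: stream messages from Google Cloud Pub/Sub into BigQuery.
# Project, subscription, and table names are hypothetical placeholders.
import json

from google.cloud import bigquery, pubsub_v1

PROJECT_ID = "example-project"              # hypothetical
SUBSCRIPTION = "orders-subscription"        # hypothetical
TABLE_ID = "example-project.sales.orders"   # hypothetical

bq_client = bigquery.Client(project=PROJECT_ID)
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION)


def handle_message(message: pubsub_v1.subscriber.message.Message) -> None:
    """Parse one Pub/Sub message and append it as a row in BigQuery."""
    row = json.loads(message.data.decode("utf-8"))
    errors = bq_client.insert_rows_json(TABLE_ID, [row])  # streaming insert
    if errors:
        message.nack()  # let Pub/Sub redeliver on failure
    else:
        message.ack()


if __name__ == "__main__":
    streaming_pull = subscriber.subscribe(subscription_path, callback=handle_message)
    try:
        streaming_pull.result()  # block and process messages until interrupted
    except KeyboardInterrupt:
        streaming_pull.cancel()
```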

 

PROFESSIONAL EXPERIENCE

 

Senior Azure Data Engineer

JP Morgan Chase & Co, Ohio.                                                                                    June 2021 to Present

Responsibilities:

  • Used Azure Data Factory extensively for ingesting data from disparate source systems. Used Azure Data Factory as an orchestration tool for integrating data from upstream to downstream systems.
  • Automated jobs using different triggers (Event, Scheduled and Tumbling) in ADF. Used Cosmos DB for storing catalog data and for event sourcing in order processing pipelines. Designed and developed user defined functions, stored procedures, triggers for Cosmos DB.
  • Worked extensively on Azure Active Directory and on-premises Active Directory.
  • Worked extensively on Azure Function Apps and WebJobs.
  • Expertise in Data Migration, Data Profiling, Data Ingestion, Data Cleansing, Transformation, Data Import, and Data Export through the use of multiple ETL tools such as Informatica PowerCenter.
  • Developed and modified Informatica integrations according to business needs.
  • Developed Talend code for S3 tagging in the process of moving data from source systems to AWS S3.
  • Involved in developing Talend jobs to copy data from S3 to Redshift.
  • Integrated a Redshift SSO cluster with Talend.
  • Integrated with Marketing Cloud through a web service consumer supporting retrieve, insert, update, and delete methods.
  • Worked extensively on Azure Cosmos DB to connect with different protocols.
  • Created DA specs and Mapping Data flow and provided the details to developer along with HLDs. Created Build definition and Release definition for Continuous Integration and Continuous Deployment.
  • Created an Application Interface Document for downstream teams to create a new interface to transfer and receive files through Azure Data Share. Created pipelines, data flows, and complex data transformations and manipulations using ADF and PySpark with Databricks. Ingested data in mini-batches and performed RDD transformations on those mini-batches using Spark Streaming to run streaming analytics in Databricks (a minimal streaming sketch appears at the end of this list).
  • Created and provisioned different Databricks clusters needed for batch and continuous streaming data processing and installed the required libraries for the clusters.
  • Proficiency in SQL across several dialects, including MySQL, PostgreSQL, Redshift, SQL Server, and Oracle.
  • Expertise in AWS Lambda functions, AWS Kinesis, Terraform, and API Gateway, submitting data via API Gateway that is accessible via Lambda functions; managed configuration of web apps and deployment to AWS cloud servers through CData.
  • Created a Lambda function that acted as a bridge between API Gateway and CData.
  • Experience in change implementation, monitoring, and troubleshooting of Snowflake databases and cluster-related issues.
  • Built a custom CRM application using Azure services such as Azure SQL Database, Azure Functions, and Azure App Service, and deployed it to the Azure cloud.
  • Extracted data from the custom CRM in Azure Cloud using CData ODBC drivers into Snowflake and integrated account data with location data.
  • Staged API and Kafka data (in JSON format) into Snowflake by FLATTENing it for different functional services (a minimal FLATTEN sketch appears at the end of this list).
  • Involved in performance tuning of various mappings, especially for flat-file and CData content transformations.
  • Expertise in Dimensional Data modeling, Star Schema/Snowflake modeling, FACT & Dimensions tables, Physical & logical data modeling, ERWIN 3., Oracle Designer, Data Integrator.
  • Expertise in using stream processing tools like AWS Kinesis and Apache Storm.
  • Created instances in AWS as well as worked on migration to AWS from data centre.
  • Developed AWS CloudFormation templates and set up Auto Scaling for EC2.
  • Created a Linked Service to land data from an SFTP location to Azure Data Lake. Created numerous pipelines in Azure using Azure Data Factory v2 to get data from disparate source systems by using different Azure activities like Move & Transform, Copy, Filter, ForEach, and Databricks.
  • Created several Databricks Spark jobs with PySpark to perform table-to-table operations.
  • Extensively used SQL Server Import and Export Data tool.
  • Created database users, logins and permissions to setup. Working with complex SQL, Stored Procedures, Triggers, and packages in large databases from various servers.
  • Helping team members to resolve any technical issue, Troubleshooting, Project Risk & Issue identification and management.
  • Used Azure Data Lake and Azure Blob Storage for storage and performed analytics in Azure Synapse Analytics.
  • 1+ years of experience in Azure Cloud, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Azure Analytical Services, Azure Cosmos NoSQL DB, Azure HDInsight big data technologies (Hadoop and Apache Spark), and Databricks.
  • Day-to-day responsibilities included developing ETL pipelines in and out of the data warehouse and developing major regulatory and financial reports using advanced SQL queries in Snowflake.
  • Handled administration for Agile PLM CQ.
  • Configured Agile PLM according to Lantronix business processes.
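Below is a minimal PySpark sketch of the kind of mini-batch streaming job described above in Databricks. It is illustrative only: the Kafka brokers, topic, schema, and output paths are assumptions, not the actual pipeline configuration.

```python
# Minimal sketch: mini-batch streaming with PySpark Structured Streaming.
# Kafka brokers, topic, schema, and output paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("mini-batch-streaming").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("account_id", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical
       .option("subscribe", "account-events")              # hypothetical
       .load())

events = (raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
             .select("e.*"))


def process_batch(batch_df, batch_id):
    """Transform each micro-batch and append it to the curated output path."""
    (batch_df.dropDuplicates(["event_id"])
             .write.mode("append")
             .parquet("/mnt/curated/account_events"))      # hypothetical path


query = (events.writeStream
         .foreachBatch(process_batch)
         .option("checkpointLocation", "/mnt/checkpoints/account_events")
         .start())
query.awaitTermination()
```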
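The JSON staging and FLATTEN step mentioned above could look roughly like the following Python sketch using the Snowflake connector. The connection parameters, stage, table, and column names are placeholders, not the actual objects used on the project.

```python
# Minimal sketch: copy staged JSON into Snowflake and flatten it with LATERAL FLATTEN.
# Connection parameters, stage, table, and column names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # hypothetical
    user="example_user",         # hypothetical
    password="********",
    warehouse="ETL_WH",
    database="RAW",
    schema="EVENTS",
)

# Assumes a table events_json with a single VARIANT column named raw.
flatten_sql = """
    INSERT INTO events_flat (event_id, service, payload_key, payload_value)
    SELECT raw:event_id::string,
           raw:service::string,
           f.key,
           f.value::string
    FROM events_json,
         LATERAL FLATTEN(input => raw:payload) f
"""

with conn.cursor() as cur:
    cur.execute("COPY INTO events_json FROM @json_stage FILE_FORMAT = (TYPE = 'JSON')")
    cur.execute(flatten_sql)
conn.close()
```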

Environment: Azure Cloud, Azure Data Factory (ADF v2), Azure Function Apps, Azure Data Lake, Blob Storage, SQL Server, AWS, cloud integration, Teradata utilities, Windows Remote Desktop, UNIX Shell Scripting, Azure PowerShell, Databricks, Python, Erwin Data Modeling Tool, MySQL, Azure Cosmos DB, Terraform, Power BI, Azure Stream Analytics, Azure Event Hubs, Azure Machine Learning.

 

Senior Hadoop/Data Engineer

Walgreens, IL                                                                                                             Oct 2019 to May 2021

Responsibilities:

  • Collaborated with Business Analysts, SMEs across departments to gather business requirements, and identify workable items for further development.
  • Assemble large, complex data sets that meet business requirements.
  • Work with stakeholders to assist in the data-related technical issues and support their data infrastructure needs.
  • Worked on migrating data from Azure Cloud to AWS Cloud.
  • Automate manual ingest processes and optimize data delivery subject to service level agreements, work with infrastructure on re-design for greater scalability.
  • Designed and developed Spark workflows that extract data from an AWS S3 bucket and apply transformations to it using Scala and Snowflake.
  • Worked extensively on Azure Cosmos DB to connect with different protocols.
  • Prepared the Application Interface Document for downstream teams to create a new interface to transfer and receive files through Azure Data Share; created pipelines, data flows, and complex data transformations.
  • Created the DWH, databases, schemas, and tables, and wrote SQL queries against Snowflake.
  • Validated data feeds from the source systems to the Snowflake cloud data warehouse.
  • Integrated and automated data workloads to Snowflake Warehouse.
  • Ensured ETL/ELT jobs succeeded and loaded data successfully into Snowflake DB.
  • Created Test cases for Unit Test, System Integration Test and UAT to check the data.
  • Used AWS Lambda to perform data validation, filtering, sorting, and other transformations for every data change in a DynamoDB table and to load the transformed data to another, heavily used data store.
  • Worked on Amazon Redshift and AWS Kinesis data, created data models, and extracted metadata from Amazon Redshift, AWS, and the Elasticsearch engine using SQL queries to create reports.
  • Partnered with ETL developers to ensure that data was well cleaned and the data warehouse was up to date for reporting purposes using Pig.
  • Selected and generated data into CSV files, stored them in AWS S3 using AWS EC2, and then structured and stored the data in AWS Redshift.
  • Performed simple statistical profiling of the data, such as cancel rate, variance, skew, and kurtosis of trades and runs for each stock daily, grouped by 1-minute, 5-minute, and 15-minute intervals.
  • Used PySpark and Pandas to calculate the moving average and RSI score of the stocks and loaded the results into the data warehouse (a minimal sketch appears at the end of this list).
  • Involved in integrating the Hadoop cluster with the Spark engine to perform batch processing and GraphX workloads.
  • Generated report on predictive analytics using Python and Tableau including visualizing model performance and prediction results.
  • Utilized Agile and Scrum methodology for team and project management.
  • Used Git for version control with colleagues.
  • Migrated data from on-premises systems to AWS storage buckets.
  • Developed a Python script to transfer data from on-premises systems to AWS S3.
  • Developed a Python script to hit REST APIs and extract data to AWS S3.
  • Implemented real time data streaming pipeline using AWS Kinesis, Lambda, and Dynamo DB as well as deployed AWS Lambda code from Amazon S3 buckets.
  • Worked on Ingesting data by going through cleansing and transformations and leveraging AWS Lambda, AWS Glue and Step Functions.
  • Created YAML files for each data source, including Glue table stack creation.
  • Worked on a Python script to extract data from Netezza databases and transfer it to AWS S3.
  • Developed Talend code for S3 tagging in the process of moving data from source systems to AWS S3.
  • Involved in developing Talend jobs to copy data from S3 to Redshift.
  • Integrated a Redshift SSO cluster with Talend.
  • Developed Lambda functions and assigned IAM roles to run Python scripts, along with various triggers (SQS, EventBridge, SNS).
  • Created a Lambda deployment function and configured it to receive events from S3 buckets.
  • Wrote UNIX shell scripts to automate jobs and scheduled cron jobs for job automation using crontab.
  • Developed various Mappings with the collection of all Sources, Targets, and Transformations using Informatica Designer.
  • Developed Mappings using Transformations like Expression, Filter, Joiner, and Lookup for better data massaging and to migrate clean and consistent data.
  • Design and Develop ETL Processes in AWS Glue to migrate Campaign data from external sources like S3, ORC/Parquet/Text Files into AWS Redshift
  • Data Extraction, aggregations and consolidation of Adobe data within AWS Glue using PySpark
  • Developed PySpark code for AWS Glue jobs and for EMR. Used Azure Data Lake and Azure Blob Storage for storage and performed analytics in Azure Synapse Analytics.
  • Experience in Azure Cloud, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Azure Analytical Services, Azure Cosmos NoSQL DB, Azure HDInsight big data technologies (Hadoop and Apache Spark), and Databricks.
  • Implemented Apache Airflow for authoring, scheduling and monitoring Data Pipelines.
  • Worked on the Reports module of the project as a developer on MS SQL Server 2005 (using SSRS, TSQL, scripts, stored procedures and views).
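The moving-average and RSI computation mentioned above can be sketched as follows with Pandas (the Pandas side only); the column names, 14-period window, and sample data are illustrative assumptions, not the project's actual parameters.

```python
# Minimal sketch: per-symbol moving average and RSI with Pandas.
# Column names ("symbol", "close") and the 14-period window are assumptions.
import pandas as pd


def add_moving_average_and_rsi(prices: pd.DataFrame, window: int = 14) -> pd.DataFrame:
    """Append rolling moving-average and RSI columns, computed per symbol."""
    out = prices.sort_values(["symbol", "trade_date"]).copy()

    def per_symbol(group: pd.DataFrame) -> pd.DataFrame:
        group["moving_avg"] = group["close"].rolling(window).mean()

        delta = group["close"].diff()
        gains = delta.clip(lower=0).rolling(window).mean()
        losses = (-delta.clip(upper=0)).rolling(window).mean()
        rs = gains / losses
        group["rsi"] = 100 - 100 / (1 + rs)
        return group

    return out.groupby("symbol", group_keys=False).apply(per_symbol)


if __name__ == "__main__":
    # Tiny synthetic example just to show the function running end to end.
    df = pd.DataFrame({
        "symbol": ["ABC"] * 20,
        "trade_date": pd.date_range("2021-01-01", periods=20),
        "close": [100 + i * 0.5 for i in range(20)],
    })
    print(add_moving_average_and_rsi(df).tail())
```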

 

Environment: Spark (PySpark, Spark SQL, Spark MLlib), Hadoop, data warehouse, Talend Administrator Console, Talend Big Data Edition, Python 3.x (Scikit-learn, NumPy, Pandas), Tableau 10.1, GitHub, AWS EMR/EC2/S3/Redshift, SQL Server, AWS Cloud, Azure Cloud.

 

Data Engineer

Molina Healthcare, Bothell, WA                                                                        Aug 2018 to Sep 2019

Responsibilities:

  • Experienced with cloud service providers such as Azure and GCP.
  • Designed and implemented database solutions in Azure SQL Data Warehouse and Azure SQL.
  • Developed multi-cloud strategies to better use GCP (for its PaaS) and Azure (for its SaaS).
  • Developed and deployed outcomes using Spark and Scala code on a Hadoop cluster running on GCP.
  • Developed a near-real-time data pipeline using Flume, Kafka, and Spark Streaming to ingest client data from their web log servers and apply transformations.
  • Migrated an entire Oracle database to BigQuery and used Power BI for reporting.
  • Implemented medium to large scale BI solutions on Azure using Azure Data Platform services (Azure Data Lake, Data Factory, Data Lake Analytics, Stream Analytics, Azure SQL DW, HDInsight/Databricks, and NoSQL DB).
  • Designed, set up, maintained, and administered Azure SQL Database, Azure Analysis Services, Azure SQL Data Warehouse, and Azure Data Factory.
  • Performed ETL operations using SSIS and loaded the data into a secure DB.
  • Migrated SQL databases to Azure Data Lake, Azure Data Lake Analytics, Azure SQL Database, Databricks, and Azure SQL Data Warehouse; controlled and granted database access; and migrated on-premises databases to Azure Data Lake Store using Azure Data Factory.
  • Expertise in Data Migration, Data Profiling, Data Ingestion, Data Cleansing, Transformation, Data Import, and Data Export through the use of multiple ETL tools such as Informatica PowerCenter.
  • Experience in developing Spark applications using Spark/PySpark SQL in Databricks for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage and consumption patterns.
  • Skilled in dimensional modeling and forecasting using large-scale datasets (Star schema, Snowflake schema), transactional modeling, and slowly changing dimensions (SCD).
  • Worked on Dimensional Data modeling, Star Schema/Snowflake modeling, FACT & Dimensions tables, Physical & logical data modeling, ERWIN 3., Oracle Designer, Data Integrator.
  • Developed scripts to transfer data from FTP server to the ingestion layer using Azure CLI commands.
  • Created Azure HDInsight clusters using PowerShell scripts to automate cluster provisioning.
  • Used stored procedure, lookup, execute pipeline, data flow, copy data, and Azure Function activities in ADF.
  • Used Azure Data Lake Storage Gen2 to store Excel and Parquet files and retrieved user data using the Blob API.
  • Worked on Azure Databricks, PySpark, Spark SQL, Azure ADW, and Hive to load and transform data.
  • Designed, developed and deployed convergent mediation platform for data collection and billing process using Talend ETL.
  • Used Azure Data Lake as Source and pulled data using Azure Polybase.
  • Used Azure Data Lake and Azure Blob Storage for storage and performed analytics in Azure Synapse Analytics.
  • 1+ years of experience in Azure Cloud, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Azure Analytical Services, Azure Cosmos NoSQL DB, Azure HDInsight big data technologies (Hadoop and Apache Spark), and Databricks.
  • Experience in designing Azure Cloud Architecture and Implementation plans for hosting complex application workloads on MS Azure.
  • Ingested data from RDBMS sources, performed data transformations, and then exported the transformed data to Cassandra as per business requirements.
  • Developed automated processes for flattening upstream data from Cassandra, which arrives in JSON format; used Hive UDFs to flatten the JSON data (a minimal sketch appears at the end of this list).
  • Worked on data loading into Hive for data ingestion history and data content summary.
  • Created Impala tables, SFTP scripts, and shell scripts to import data.
  • Created Hive tables, was involved in data loading and writing Hive queries, and developed Hive UDFs for rating aggregation.
  • Provided ad-hoc queries and data metrics to the Business Users using Hive, Pig.
  • Performed various performance optimizations such as using the distributed cache for small datasets, partitioning and bucketing in Hive, and map-side joins.
  • Used JIRA for bug tracking and CVS for version control.
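One common way to flatten JSON in Hive with Python is a streaming script invoked through Hive's TRANSFORM clause (rather than a compiled UDF); the sketch below shows that generic approach with a hypothetical column layout, not the project's actual UDF or schema.

```python
#!/usr/bin/env python3
# Minimal sketch: flatten JSON records inside Hive via a Python streaming
# script used with the TRANSFORM clause, e.g.:
#   SELECT TRANSFORM (json_col)
#   USING 'python3 flatten_json.py'
#   AS (record_id, field_name, field_value)
#   FROM cassandra_raw;
# The (record_id, field_name, field_value) layout is a hypothetical example.
import json
import sys

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    record = json.loads(line)
    record_id = record.get("id", "")
    # Emit one (id, key, value) row per top-level field, tab-separated,
    # which is the format Hive expects from TRANSFORM output.
    for key, value in record.items():
        if key == "id":
            continue
        print("\t".join([str(record_id), key, json.dumps(value)]))
```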

 

Environment: PL/SQL, Python, Azure Data Factory, Azure Blob Storage, Azure Table Storage, Azure SQL Server, Apache Hive, Apache Spark, MDM, Netezza, Teradata, Terraform, Oracle 12c, SQL Server, Teradata SQL Assistant, Microsoft Word/Excel, Flask, AWS S3, Power BI, AWS Redshift, Snowflake, AWS RDS, Talend Administrator Console, Talend Big Data Edition, DynamoDB, Athena, Lambda, MongoDB, Pig, Sqoop, Tableau, UNIX, Docker, Kubernetes, GCP, BigQuery.

 

Hadoop Developer (Multiple Projects)

Peritus Healthcare Solutions, India                                                                  May 2014 to July 2018

Responsibilities:

  • Migrated an entire Oracle database to BigQuery and used Power BI for reporting.
  • Experience in GCP Dataproc, GCS, Cloud Functions, and BigQuery.
  • Built data pipelines in Airflow on GCP for ETL-related jobs using different Airflow operators (a minimal sketch appears at the end of this list).
  • Worked on downloading BigQuery data into Pandas or Spark data frames for advanced ETL capabilities.
  • Used Talend to load data into our warehouse systems.
  • Created Talend jobs to copy files from one server to another and utilized Talend FTP components.
  • Worked with Google Data Catalog and other Google Cloud APIs for monitoring, query, and billing-related analysis for BigQuery.
  • Used the Cloud Shell SDK in GCP to configure the services Dataproc, Storage, and BigQuery.
  • Set up a Hadoop cluster on Amazon EC2 using Whirr for a POC.
  • Pulled data from Veeva to the Hadoop cluster using CData.
  • Involved in performance tuning of various mappings, especially for flat-file and CData content transformations.
  • Creating AWS Lambda functions using python for deployment management in AWS and designed, investigated and implemented public facing websites on Amazon Web Services and integrated it with other applications infrastructure.
  • Creating different AWS Lambda functions and API Gateways, to submit data via API Gateway that is accessible via Lambda function.
  • Responsible for building CloudFormation templates for SNS, SQS, Elasticsearch, DynamoDB, Lambda, EC2, VPC, RDS, S3, IAM, MySQL, and CloudWatch services implementation, and integrated them with Service Catalog.
  • Implemented a nine-node CDH3 Hadoop cluster on Red Hat Linux.
  • Upgraded CDH ecosystems in cluster using CDP.
  • Experienced in installation and configuration of CDH4 in all environments.
  • Installed CDP 7.1 in data center.
  • Experienced in installation of CDH 5 with upgradation from CDH4 to CDH5.
  • Worked on cluster installation, commissioning and decommissioning of DataNodes, NameNode recovery, capacity planning, and slot configuration.
  • Migrated the Hadoop cluster from HDP to CDP 7.1.
  • Worked on Talend Administration Console (TAC) for scheduling jobs and adding users.
  • Performed resource management of the Hadoop cluster, including adding/removing cluster nodes for maintenance and capacity needs; involved in loading data from the UNIX file system to HDFS.
  • Created HBase tables to store variable data formats of PII data coming from different portfolios.
  • Implemented best income logic using Pig scripts.
  • Implemented test scripts to support test driven development and continuous integration.
  • Responsible to manage data coming from different sources.
  • Installed and configured Hive and wrote Hive UDFs.
  • Deployed a Hadoop cluster in a POC environment using the latest Cloudera CDP.
  • Experience in working on both CDH and CDP.
  • Created POCs for implementing different use cases such as usability of CDP, Spark, etc.
  • Experienced in loading and transforming large sets of structured, semi-structured, and unstructured data.
  • Provided cluster coordination services through ZooKeeper.
  • Experience in managing and reviewing Hadoop log files.
  • Exported the analysed data to the relational databases using Sqoop for visualization and to generate reports for the BI team.
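The Airflow and BigQuery work above can be illustrated with a minimal DAG sketch. The DAG ID, project, table, and schedule are assumptions, and the single task simply pulls a BigQuery result set into a Pandas data frame as described in the bullets, rather than reproducing the production pipelines.

```python
# Minimal sketch: an Airflow DAG whose task pulls BigQuery data into Pandas.
# DAG ID, project, table, and schedule are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from google.cloud import bigquery


def load_bigquery_to_pandas(**context):
    """Run a query against BigQuery and materialize the result as a DataFrame."""
    client = bigquery.Client(project="example-project")     # hypothetical
    sql = "SELECT order_id, amount FROM `example-project.sales.orders` LIMIT 1000"
    df = client.query(sql).to_dataframe()
    print(f"Pulled {len(df)} rows for downstream transformation")
    return len(df)  # pushed to XCom for downstream tasks


with DAG(
    dag_id="bigquery_to_pandas_etl",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="load_bigquery_to_pandas",
        python_callable=load_bigquery_to_pandas,
    )
```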

Environment: Hadoop, HDFS, CDH4, CDP4, CDP5, Cloudera Manager, Hive, Flume, HBase, Sqoop, Pig, MySQL, Ubuntu, ZooKeeper, Amazon EC2, Solr, GCP, BigQuery.

 

TECHNICAL SKILLS

 

Big Data Tools: Hadoop Ecosystem: MapReduce, Spark 2.3, CDP4, CDP5, Airflow 1.10.8, NiFi 2, HBase 1.2, Hive 2.3, Pig 0.17, Sqoop 1.4, Kafka 1.0.1, Oozie 4.3, Hadoop 3.0, CDH4.

BI Tools: SSIS, SSRS, SSAS.

Data Modeling Tools: Erwin Data Modeler, ER Studio v17

Programming Languages: SQL, PL/SQL, and UNIX.

Methodologies: RAD, JAD, System Development Life Cycle (SDLC), Agile

Cloud Platform: Azure, Google Cloud, Terraform.

Cloud Management: Amazon Web Services (AWS) – EC2, EMR, S3, Redshift, Lambda

Databases: Oracle 12c/11g, Power BI, Teradata R15/R14.

OLAP Tools: Tableau, SSAS, Business Objects, and Crystal Reports 9

ETL/Data warehouse Tools: Informatica 9.6/9.1, and Tableau.

Operating System: Windows, UNIX, Sun Solaris

 

EDUCATION:

 

Bachelor of Technology, JNTUH.                                                                                                                          May 2014.
