Senior Data Engineer
Ramachandra G
Hounslow, United Kingdom
Skills
I'm offering
Data engineer with big data, Azure and AWS; Java/J2EE, PySpark
Markets
United Kingdom
Links for more
Once you have created a company account and a job, you can access the profiles links.
Language
English
Fluent
Ready for
Larger projects
Ongoing relation / part-time
Full-time contractor
Available
My experience
2006 - ?
Senior Big Data Engineer at IMG
Projects: SAP Finance (Azure Migration)
Environment: Azure (Azure Data Lake Store, Azure DWH, SQL/PLSQL, Azure Data Factory V2 data pipelines, HDInsight, Azure Databricks notebooks, Python/PySpark, SSMS, Spark SQL, SQL Server warehouse, Visual Studio, DAX, Power BI, Snowflake), SAP HANA, tabular data modelling (star & snowflake), Azure DevOps, JIRA, API, FTP, ADLS, Logic Apps, Git, Cosmos DB, Azure Analysis Services, SSIS, SSAS, SSRS, T-SQL, Hadoop, HDFS, MapReduce, Spark, Sqoop, Hive, HBase, Kafka, UNIX shell, Oozie, Impala, Bitbucket
Job Profile & Responsibilities:
• Working in an Agile environment (Scrum, JIRA) with two-week sprints, taking part in backlog grooming, sprint planning, retrospectives and daily stand-ups
• Designed and developed Data Feed and Data Mart generation frameworks, building ETL pipelines that pull data from the JIRA API, FTP and SAP HANA into Azure Data Lake with Data Factory and Databricks (a PySpark sketch follows this list)
• Mentored team members; carried out peer reviews, unit testing, integration testing and user acceptance testing; researched issues and adopted new tools and libraries as requirements changed
• Migrated on-premises Hadoop workloads to the cloud (Azure)
• Designed and built the tabular data model in Visual Studio and generated reports in Power BI
• Built data validation, data integrity and data reconciliation frameworks, and handled production releases with Azure DevOps
• Prepared SSRS and Power BI reports from the tabular model using Azure Analysis Services
• Prepared data for various end users, including data analyst (visualization) teams, data scientists and business users
• Took part in production releases, post-production releases, DR activities and support
• Built data pipelines for historical data migration and maintenance, automated processes, and worked with stakeholders such as the testing team, business analysts and end users
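For context, a minimal sketch of the kind of PySpark step the Data Factory-orchestrated Databricks notebooks above would run. The storage account, container paths and column names are hypothetical placeholders, not details from the actual project.

```python
# Hypothetical Databricks (PySpark) ingestion step: raw CSV extract in ADLS ->
# typed, partitioned Parquet in the curated zone. All paths and columns are
# illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sap-finance-ingest").getOrCreate()

# Read a raw extract landed in Azure Data Lake Storage by the ADF pipeline.
raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/sap_finance/gl_postings/")
)

# Light standardisation on the way to the curated zone: typed columns plus
# load metadata for downstream reconciliation.
curated = (
    raw.withColumn("posting_date", F.to_date("posting_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("ingested_at", F.current_timestamp())
)

# Write partitioned Parquet for Spark SQL and Power BI consumption.
(
    curated.write
    .mode("overwrite")
    .partitionBy("posting_date")
    .parquet("abfss://curated@examplelake.dfs.core.windows.net/sap_finance/gl_postings/")
)
```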
Senior Big Data Engineer at State Street Corporation
Projects: PXO (Process Excellence Office)
Environment: Hadoop, MapReduce, YARN, HDFS, Hive, Sqoop, Flume, Oozie, Spark, PySpark, Python, Kafka, Cloudera, HBase, Hue, Cloakware, Java, Eclipse, SVN, DB2, Teradata, UNIX shell, Rally, SQL, Autosys, Tableau, Git, Bitbucket, scikit-learn, Spark SQL, Jenkins, Oracle, Docker, Airflow, SQL Server, PostgreSQL, MySQL, Talend, JIRA, AWS (S3, EC2, EMR, Redshift, RDS, SQS, SNS, Data Pipeline, DynamoDB)
Job Profile & Responsibilities:
• Performed data ingestion, transformation and modelling for new business requirements, and played a significant role in the discovery, design and development of the IPA, risk management and MTEX projects
• Designed and implemented fully automated data ingestion pipelines using Sqoop, Kafka, Spark, Hive and Oozie on AWS EMR clusters and Lambdas
• Coordinated and led onshore and offshore big data teams, mentoring members to deliver production-standard work on time
• Performed data modelling, transformation/enrichment and data management for different business use cases using Scala, Spark and Python, and helped design and implement a unit testing framework to verify the process and quality of the data lake
• Built orchestrated data pipelines with Spark (Python), Spark SQL, MapReduce programs (Java), Hive and Pig running on a YARN cluster, and developed a MapReduce/Spark interface for data security
• Wrote shell scripts, wrote and configured Autosys JIL jobs, deployed to different environments, and monitored Hadoop cluster job performance and provisioning (commissioning/decommissioning, capacity scheduling) with Cloudera Manager
• Migrated and loaded structured (Teradata, Oracle), unstructured (Excel) and semi-structured (XML, JSON) data by building pipelines into the Hadoop data lake with Sqoop, Kafka and Flume, following an ETL data lake architecture
• Created Hive tables, loaded data, produced results with dynamic partitions and buckets, loaded historical data into static partitions, and wrote Hive queries (HQL) and Hive UDFs (a dynamic-partition sketch follows this list)
• Followed an Agile methodology for development with continuous integration and deployment (CI/CD)
• Migrated SQL stored procedures (business use cases) to Hive-based queries on the IPA data lake to produce Tableau reports
• Developed Oozie XML workflows to trigger jobs end to end (shell, Sqoop, Hive and Java actions) for better data lineage and feed management
• Strong knowledge of Kerberos, Cloakware and LDAP integration across different environments
• Worked extensively on pipeline analysis and design, Hive query optimization (partitions, buckets, compression codecs (Snappy, LZO, Gzip), data formats (Parquet, ORC)), historical data loads, production releases and resolving production issues
• Handled web requests by storing them in HBase for real-time reads and writes while the operational system was facing downtime
• Provided data to end users for analytics, reporting and business use; developed a sensitive-data process that identifies sensitive data in incoming feeds from their metadata; and wrote machine learning models with scikit-learn for NAV forecasting
• Resolved production code issues, with good experience in production deployment activities including peer reviews, release packaging, deployments and post-production activities
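As an illustration of the Hive loading pattern described above (dynamic partitions for daily loads, static partitions for history), here is a hedged Spark SQL sketch; the pxo database, table and column names are invented for the example.

```python
# Hedged sketch of a dynamic-partition Hive load; database, table and column
# names are invented for illustration.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("pxo-hive-load")
    .enableHiveSupport()
    .getOrCreate()
)

# Dynamic partitioning must be switched on before a multi-partition insert.
spark.sql("SET hive.exec.dynamic.partition = true")
spark.sql("SET hive.exec.dynamic.partition.mode = nonstrict")

# Target table, partitioned by business date and stored as Parquet.
spark.sql("""
    CREATE TABLE IF NOT EXISTS pxo.trades_enriched (
        trade_id STRING,
        notional DECIMAL(18,2),
        desk     STRING
    )
    PARTITIONED BY (business_date STRING)
    STORED AS PARQUET
""")

# Dynamic-partition insert: each row is routed to its business_date partition,
# which must appear last in the SELECT list.
spark.sql("""
    INSERT OVERWRITE TABLE pxo.trades_enriched PARTITION (business_date)
    SELECT trade_id, notional, desk, business_date
    FROM pxo.trades_staging
""")
```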
2014 - 2018
Big Data Engineer at Bank of America
Projects: ALPS, IaaS, Interact
Environment: Hadoop, MapReduce, YARN, HDFS, Hive, Sqoop, Flume, Oozie, Spark, Scala, Kafka, Alteryx, Hue UI, Java, Eclipse, SVN, DB2, Netezza, Teradata, HBase, UNIX shell, Rally, Cloudera Distribution, Autosys, Tableau, Git, Bitbucket, GCP (BigQuery, Cloud Dataflow, Cloud Dataproc, Cloud Composer)
Job Profile & Responsibilities:
• Migrated data pipelines from Netezza to the Hadoop data lake platform, following an ETL data lake architecture, for structured (DB2, Teradata), unstructured (Excel) and semi-structured (JSON and XML) data
• Performed architecture and system design, system analysis and programming in object-oriented languages such as Java with the distributed frameworks Hadoop and Spark
• Researched technical options to choose the right tools, data structures and open-source technologies for better, more cost-effective system development
• Participated in designing multi-tier solution architectures for applications and resolved technical and functional issues in the applications
• Implemented a data quality and data integrity framework for data validation in the Hadoop platform with shell and Hive (a reconciliation sketch follows this list)
• Wrote pipelines for data extraction from different sources into the Hadoop platform using Sqoop, Hive, FTP, Flume and Kafka
• Analysed, designed and implemented the ETL migration pipeline to Hadoop
• Wrote and configured Autosys jobs (JIL), wrote shell scripts and Oozie workflows, and integrated Hadoop ecosystem components (Hive, Sqoop, Flume, etc.)
• Migrated historical data from different source systems (Teradata, DB2, Netezza) to the Hadoop data platform, validating the data in coordination with the testing team
• Handled release activities end to end: merging code to the master branch, getting approval from stakeholders (product owners, business analysts, testers, operations), deploying code, and supporting and maintaining the application, including bug fixes, re-running data loads for ad hoc requests and resolving data issues
• Developed a machine learning clustering model in Python for MSA and deployed it to production
• Mentored team members, provided solutions for production issues, explored options for migrating to a cloud platform, and provided the technical architecture and solutions
• Coordinated teams including testing, business analysts, the release management (DevOps) team and risk management teams/business leaders to ensure successful product deployments
• Created Hive tables, loaded data, produced results, wrote Hive queries and Hive UDFs, and implemented an interface for encrypting and decrypting data with an open-source Java API
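The data quality framework mentioned above was built with shell and Hive; the sketch below shows the same source-vs-target reconciliation idea in PySpark, with hypothetical table names.

```python
# Illustrative source-vs-target reconciliation check, the core idea behind the
# data quality/integrity framework above. Table names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("dq-reconciliation")
    .enableHiveSupport()
    .getOrCreate()
)

def reconcile(source_table: str, target_table: str) -> None:
    """Fail fast if the lake copy does not match the source row count."""
    src_count = spark.table(source_table).count()
    tgt_count = spark.table(target_table).count()
    if src_count != tgt_count:
        raise ValueError(
            f"Row-count mismatch: {source_table}={src_count}, "
            f"{target_table}={tgt_count}"
        )
    print(f"OK: {source_table} -> {target_table} ({src_count} rows)")

# Example invocation for one migrated Netezza feed.
reconcile("staging.netezza_positions", "lake.positions")
```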
2006 - 2011
Software Engineer at CSC (clients: Fidelity, Chevron)
Projects: Visa Enrollment Manager, OPCert, VisaInfo 2.1, TPSS, XES, DPS, MFA
Environment: Java, Servlets, JSP, JavaScript, AJAX, Spring, Hibernate, XML, web services, DB2 8.2, XES, IFX, Rational ClearCase, Maven, JUnit, Struts, Tomcat, WebSphere, JSF, Portlets
Job Profile & Responsibilities:
• Designed and developed interfacing code with other systems through both SOAP and REST web services; provided technical assistance and consultation to business users and mentored support team members
• Worked with Servlets and the Spring Framework to implement business logic against the database; implemented stored procedures and consumed them from Java
• Developed RESTful web services using top-down and bottom-up approaches depending on the requirements; configured JDBC and made extensive use of JNDI with connection pooling
• Implemented Spring Web Flow to manage navigation among the web pages
• Developed the data access layer using the Singleton, Data Access Object (DAO), Session Facade and Business Delegate patterns for different stories (a DAO sketch follows this list)
• Implemented user stories with front-end technologies such as AngularJS, AJAX and jQuery, with the Spring Framework on the back end
• Took part in code integration, code reviews and user acceptance testing
• Took part in production releases and post-deployment activities
• Led the project development process, including solution design, solution development, integrated system testing, user acceptance testing support and technical documentation
• Prepared high-level designs, communicated project status to stakeholders, internal customers and senior management, and delivered technical specifications and low-level designs
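The data access layer above was built in Java/Spring; as a language-agnostic illustration of the DAO pattern (written in Python to match the other sketches, with invented class, table and column names), the pattern keeps SQL and connection handling behind one small interface.

```python
# Minimal DAO-pattern sketch (Python stand-in for the Java/Spring original).
# Class, table and column names are invented for illustration.
import sqlite3
from dataclasses import dataclass
from typing import Optional

@dataclass
class Enrollment:
    enrollment_id: int
    member_name: str

class EnrollmentDao:
    """Data Access Object: callers never touch SQL or connections directly."""

    def __init__(self, db_path: str = ":memory:") -> None:
        self._conn = sqlite3.connect(db_path)
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS enrollment (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def save(self, e: Enrollment) -> None:
        self._conn.execute(
            "INSERT OR REPLACE INTO enrollment (id, name) VALUES (?, ?)",
            (e.enrollment_id, e.member_name),
        )
        self._conn.commit()

    def find(self, enrollment_id: int) -> Optional[Enrollment]:
        row = self._conn.execute(
            "SELECT id, name FROM enrollment WHERE id = ?", (enrollment_id,)
        ).fetchone()
        return Enrollment(*row) if row else None

# Usage: the service layer depends only on the DAO interface.
dao = EnrollmentDao()
dao.save(Enrollment(1, "example member"))
print(dao.find(1))
```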
My education
n/a
Masters, Technology
Ramachandra's reviews
Ramachandra has not received any reviews on Worksome.