Strong ETL data pipelines using tools such as Apache NiFi, Kafka, and Spark. Processing data at enterprise scale with Spark using Scala, Python, and Java.
Medard Sery, Hillingdon, United Kingdom
I'm offering
I have wealth of experience with Hadoop ecosystem. My expertise is in Data Engineering using Spark.
Markets
United Kingdom
Languages
English: Fluent
French: Fluent
Ready for
Larger projects
Ongoing relationship / part-time
Full time contractor
Available
My experience
2020 - 2020
Freelance
Solutions Consultant
Cloudera
Technologies: Hadoop (Hortonworks/Cloudera CDH/HDP), HDP 2.x/3.x, CDH 6.x, Hive, HDFS, HBase, Kerberos, MIT KDC, LDAP, Active Directory, Multicast, Avro, ORC, Parquet, Oozie, NiFi, Kafka, Spark, Spark Streaming, Spark Structured Streaming, Scala, Python, Java, machine learning, Bash scripting, Kubernetes, AWS big data services
Working with various clients across different domains throughout EMEA.
Example projects:
Multinational energy company, cyber-security project
• Integrated Spark and Kafka to build data pipelines and analytics with Structured Streaming in Scala.
• Cleansed unstructured data and deployed the resulting structured data to AWS S3 buckets.
Financial institution
• Performance tuning and optimisation of a Spark/Java application and Hive.
• Mentored the development team and provided guidance on data modelling and big data architecture.
Responsibilities:
• Building, upgrading, migrating, configuring, tuning, and supporting the administration of CDH, HDP, and HDF clusters for various services on different operating systems and clouds.
• Securing the cluster environment with Kerberos using AD, Knox, Ranger, Ranger KMS, and SSL/TLS.
• Writing Spark jobs for data ingestion, and designing, developing, and tuning Hive tables.
• Automating Spark jobs using Oozie workflows.
• Tuning and optimising Spark jobs and resolving Hive ORC small-file issues.
• Configuring Zeppelin security with Apache Livy and LDAP integration.
• Building a dashboard of cluster metrics for business users with NiFi data flows and Hive, and building Zeppelin notebooks for analysing cluster metrics.
• Building NiFi data flows to ingest data from various sources into sinks.
• Integrating the cluster with third-party analytics and visualisation tools, e.g. Tableau.
• Training and supporting business teams during onboarding, and supporting the data science teams.
2018 - 2020
Job
Associate Consultant
Hortonworks
Technologies: Hadoop (Hortonworks HDP), HDP 2.6.3, Hive, HDFS, HBase, Kerberos, MIT KDC, LDAP, Active Directory, Oozie, NiFi, Kafka, Spark, Linux, Bash scripting
Working with various clients across different domains throughout EMEA.
Example project: Government
Responsibilities:
• Building, upgrading, migrating, configuring, tuning, and supporting the administration of the HDP cluster.
• Day-to-day tuning and optimisation of the HDP cluster, and troubleshooting the HDP 2.6.3 stack.
• Securing the cluster environment with Kerberos using AD, Knox, Ranger, Ranger KMS, and SSL/TLS.
• Clearing HBase, Solr, and Kafka logs.
• Contributing to on-premises customer support by writing technical documentation.
2004 - 2018
Job
Head of Computer Science
Elthorne Park High School
Responsibilities:
• Provided strategic direction for the development of Computing.
• Provided all those involved in the teaching of Computing with the support, challenge, information, and development necessary to sustain motivation and secure improvement in teaching and learning.
2001 - 2004
Job
Technical Specialist
IBM
Responsibilities:
• Provided hardware and software support to EMEA customers.
• Applied strong listening skills to understand the problems users faced, and communication skills to help users and clients resolve their issues.
My education
2016 - 2018
University of Westminster
MSc, Big Data Technologies
1997 - 2001
London Guildhall University
BSc, Computing and Information Systems
Medard's reviews
Medard has not received any reviews on Worksome.