Senior Data Scientist | Disruptive Engineering Limited - Helping Businesses use AI to achieve their Strategic Objectives
Radu Andrei Nedelcu, London
I'm offering
• Data Scientist and Machine Learning Engineer with experience in leading teams and in modelling Financial Data, Natural Language Processing, and Computer Vision.
• Strong experience in the entire Data Science workflow: from breaking down problems into smaller parts, researching solutions, selecting and transforming large datasets, and experimenting with and selecting the right algorithms, up to deploying algorithms in production.
• Experience with Big Data in Machine Learning using Python Multiprocessing, Spark and Hadoop, as well as experience writing production-level GPU code.
• Strong experience in Python/C++ for Linux and Windows.
Markets
United Kingdom
Languages
German - Good
English - Fluent
Ready for
Larger projects
Ongoing relation / part-time
Full-time contractor
Available
My experience
2019 - ?
Interim Lead Data Scientist/Senior Data Scientist
Ernst & Young.
Helped the EY Brain Team set up the Machine Learning Practice and produced the first Proofs of Concept for predictions of Mergers & Acquisitions, later on focusing on advancing the models.
• Researched public and internal information on ML models for Mergers and Acquisitions, and generated ideas for potential use cases of ML in the M&A process
• Created the first Proof of Concept models for applications of Machine Learning to M&A using Pandas and Random Forests in scikit-learn
• Analysed options for the Machine Learning Architecture and its integration with the Engineering Architecture in Azure, and selected Databricks as the system that allowed the use of Spark for cluster-based data processing, MLflow for experiment running and tracking, and fast deployment into Kubernetes
• Researched and experimented with a number of mechanisms for modelling imbalanced datasets: Logistic Regression with and without Balanced Weights, Random Forests with and without Balanced Weights, Blagging (Random Forests whose Decision Trees use undersampling), and plain undersampling and oversampling
• Implemented a number of best practices in the team, such as fixing the random seed start, in order to get accurate scores for our models
• Analysed multiple data sources and selected complementary ones, such as CapIQ for financial data, Factiva for news and Oxford Economics for forecasts
• Machine Learning team management duties, including planning the team's workload, providing guidance on priorities, planning the team structure and size, interviewing and hiring
• Participated in user interviews to help shape both how we built the algorithms and the platform on which they would run. A simple product and model explainability were key takeaways
• Gave a number of presentations explaining to C-level stakeholders how Machine Learning works and how it could be used
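The "Balanced Weights" option mentioned above reweights classes inversely to their frequency so that the minority class is not drowned out. A minimal sketch of the heuristic that scikit-learn's class_weight='balanced' applies (n_samples / (n_classes * class_count)), run on toy labels rather than any actual M&A data:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Weight each class by n_samples / (n_classes * class_count),
    the heuristic behind class_weight='balanced' in scikit-learn."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# Toy imbalanced labels: 9 negatives, 1 positive (illustrative only).
weights = balanced_class_weights([0] * 9 + [1])
# The minority class receives a proportionally larger weight.
```

Multiplying each sample's loss contribution by its class weight is equivalent to oversampling the minority class by the same factor, without duplicating rows.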
2017 - 2019
Data Scientist and Machine Learning Engineer at Serendipity AI
News Recommendation System.
Helped put into practice a News Classifier and created a Topic/User-based News Recommendation System using NLP.
• Continuous design and development of Machine Learning Algorithms and Infrastructure, APIs and System Architecture for a News Recommender.
• Used Named Entity Detectors from spaCy and DBpedia, and Jaccard Similarity together with Levenshtein Distance, to detect and match named entities in news and other text data
• Developed a new vectorisation method for the detected named entities in text and worked on a mechanism that would qualify their expertise to different topics.
• Deployed Spark, Hadoop and HBase on a cluster of 3 computers in order to speed up Machine Learning Processing
• Developed an ML processing pipeline that would allow information to flow to HBase and inside it in order to process it either locally or in parallel using PySpark. Every stage in the pipeline was designed as a microservice which had access to only an input and an output table. The pipeline also posted updates on Slack and had the option to upload data and models to the cloud
• Implemented a recommendation system using a Neural Network set up as an Autoencoder and Cosine Similarity from Spotify Annoy. The Autoencoder was trained with Keras to compress the data and produce a smaller similarity index, thus requiring a smaller server
• Brought to production level an article judging system. The system had a classification service and a training application. I used Celery to train every night and to restart the worker pool of the judging service when new models were available
• Improved code quality and reduced repeated code across applications written in both Flask and CherryPy by creating a shared library. Added a logging system based on Python's logging module, with handlers for local logging and Rollbar
• Created a number of APIs using Flask that ran on AWS and connected to Neo4j, extracted common endpoints into separate services or refactored the APIs to use a single endpoint rather than duplicate queries
• Set up a testing framework that would allow APIs to be tested before and after deployment using Jenkins, and wrote integration tests for the APIs that were deployed in production
• Posted job ads, created technical tests and interviewed applicants
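The entity-matching combination above (Jaccard similarity over tokens plus Levenshtein edit distance for spelling variants) can be sketched in a few lines of plain Python. The matcher and its thresholds are illustrative, not Serendipity's actual pipeline:

```python
def jaccard(a, b):
    """Jaccard similarity between two token collections."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def levenshtein(s, t):
    """Classic dynamic-programming edit distance (one row at a time)."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (cs != ct)))  # substitution
        prev = cur
    return prev[-1]

def match(entity, candidates, max_edits=2, min_jaccard=0.5):
    """Hypothetical matcher: accept candidates whose token overlap is
    high enough, or whose spelling is within a small edit distance."""
    return [c for c in candidates
            if jaccard(entity.split(), c.split()) >= min_jaccard
            or levenshtein(entity.lower(), c.lower()) <= max_edits]

hits = match("Goldman Sachs", ["Goldman Sachs Group", "Microsoft"])
```

Jaccard catches reordered or partially overlapping names, while Levenshtein catches typos that token overlap misses; using both reduces false negatives from either measure alone.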
2017 - 2017
Machine Learning Engineer
Capp and Co.
Set up a Proof of Concept System for Automatic Machine Learning that would allow users to plug in data and automatically train classifiers.
• Researched and integrated an automatic machine learning algorithm picker in Python: looked at auto-sklearn (Bayesian optimization for algorithm selection), TPOT (genetic algorithms for feature processing and algorithm selection) and NEAT (genetic algorithms for neural network evolution), and selected auto-sklearn, which is built on top of scikit-learn
• Developed the architecture for experimentation and result visualization for machine learning algorithms, using services built with C# (ASP.NET Core) and Python (Flask) which communicated via REST and RabbitMQ
• Built the system's presentation layer using Angular 4
• Wrote a speech-to-text extraction service using the Google Speech-to-Text API
• Integrated MongoDB and connected all the services to it so that they could save processing results
• Used an Nginx reverse proxy for basic authentication
• Integrated all the applications in Docker with their own private network, and used Docker Compose to allow for continuous integration and faster deployment
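The basic authentication handled by the reverse proxy above boils down to checking an HTTP Basic Authorization header. A small stdlib sketch of what such a header contains (the credentials are placeholders, and this illustrates the mechanism rather than the project's actual setup):

```python
import base64

def parse_basic_auth(header):
    """Decode an HTTP Basic 'Authorization' header into (user, password).
    This is the credential pair a reverse proxy checks against its
    htpasswd file; illustrative only."""
    scheme, _, token = header.partition(" ")
    if scheme != "Basic":
        raise ValueError("not a Basic auth header")
    user, _, password = base64.b64decode(token).decode().partition(":")
    return user, password

# Build a header the way a browser would, then parse it back.
header = "Basic " + base64.b64encode(b"alice:s3cret").decode()
creds = parse_basic_auth(header)
```

Because the credentials are only base64-encoded, not encrypted, Basic auth is only safe behind TLS, which is another reason to terminate it at the proxy.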
2016 - 2017
Research Engineer
Oxehealth.
Led the data engineering team and worked on Big Data microservices that connected cameras installed on site with Oxehealth's data warehouse. Worked on Oxehealth's TechCrunch London demo.
• Design and Development of the Microservices Architecture for video data retrieval from customer sites using ZeroMQ, gRPC, and Boost Program Options and Property Tree for C++
• Set up a VPN Network to connect customer deployments to a central data repository using pfSense
• Creating deployment scripts using Python for customer installations
• Design and development of a breathing robot which could replicate different breathing patterns
• UI upgrades, systems integration, and running of Oxehealth's live demonstration for TechCrunch's Startup Battlefield.
• Designed and Developed an application that allowed for multiple room monitoring using Qt
• Various processing pipeline improvements using C++
• Various improvements of the build system - library integrations, continuous integration and testing with Jenkins and Python scripts
• Running stand-ups, setting priorities with the team, and reviewing test plans
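A Python deployment script for customer installations like the one mentioned above typically starts with a command-line interface and an explicit plan of steps. A hypothetical skeleton using argparse; the flag names and steps are illustrative, not Oxehealth's actual tooling:

```python
import argparse

def build_parser():
    """Hypothetical CLI for a customer-site deployment script."""
    p = argparse.ArgumentParser(description="Deploy to a customer site")
    p.add_argument("--site", required=True, help="customer site identifier")
    p.add_argument("--cameras", type=int, default=1,
                   help="number of cameras to register")
    p.add_argument("--dry-run", action="store_true",
                   help="print the plan without touching the site")
    return p

def plan(args):
    """Return the ordered steps the script would run."""
    steps = [f"open VPN tunnel to {args.site}",
             f"register {args.cameras} camera(s)",
             "sync configuration",
             "restart services"]
    return ["(dry run) " + s for s in steps] if args.dry_run else steps

args = build_parser().parse_args(["--site", "site-01", "--cameras", "2"])
steps = plan(args)
```

Separating the plan from its execution makes a --dry-run mode trivial, which matters when the target is hardware at a customer site.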
2016 - 2016
Computer Vision Engineer
Meta Vision Systems.
Developed and optimized Computer Vision Algorithms for a Camera and Laser-based Measurement System for Oil Pipes.
• Full stack design and development from image capture and processing to point clouds sent over the network using multiple threads and a Pipeline Architecture in order to measure Large Pipes with Lasers and Cameras
• General Purpose GPU (GPGPU) Programming to accelerate Image Processing Algorithms - convolution and point extraction via new kernels or through OpenCV
• Implemented algorithms such as K-Means and Ordinary Least Squares through OpenCV for finding points of interest and then line fitting
• Design and development of network communication channels for transmission of data, commands, and replies using Type Length Value (TLV) messages and Boost ASIO
• Designed and developed a logging system using Microsoft ETW
• Use of Point Cloud Library (PCL) for surface reconstruction and for visualization of STL files and Point Clouds
• Used Boost Property Tree to implement a configuration file parser that uses JSON files
• Deployed Jenkins for automatic build verification and to run test cases
• C++ code written to be cross-platform (Windows or Linux) using C++11 and C++14
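The Type-Length-Value framing mentioned above prefixes every message with its type and payload size, so a receiver can walk a byte stream without knowing the message contents. A minimal Python sketch of the idea; the 2-byte-type / 4-byte-length big-endian layout is an illustrative choice, not the actual wire format used at Meta Vision:

```python
import struct

def encode_tlv(msg_type, payload):
    """Pack one TLV message: 2-byte type, 4-byte length, then payload
    (big-endian; an assumed layout for illustration)."""
    return struct.pack(">HI", msg_type, len(payload)) + payload

def decode_tlv(buf):
    """Yield (type, value) pairs from a buffer of TLV messages."""
    offset = 0
    while offset < len(buf):
        msg_type, length = struct.unpack_from(">HI", buf, offset)
        offset += 6  # header size: 2 (type) + 4 (length)
        yield msg_type, buf[offset:offset + length]
        offset += length

# Two messages concatenated into one frame, then decoded back.
frame = encode_tlv(1, b"cmd") + encode_tlv(2, b"reply")
decoded = list(decode_tlv(frame))
```

Because each length field tells the parser exactly where the next header starts, unknown message types can simply be skipped, which keeps old clients compatible with newer protocol versions.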
2013 - 2016
Software Engineer in the Secure Systems Group
Qualcomm.
Windows Driver Development (Mobile) for the Contactless Payments Chip using C/C++.
• Ported an Android library to a Windows Driver using C, C++, JNI and Java
• Full software development lifecycle for features for Windows Drivers, Windows Applications, and Android using C, C++, C#, and PlantUML in Visual Studio and Eclipse
• Participated in bring-up activities for new platforms for both Windows and Android
• Trained new team members
• Coordinated with teams across the globe via Scrums in order to achieve expected product quality
• Debugged customer and partner issues and those arising during testing
• Developed a script in PowerShell for improving the team's efficiency
• Launched a study group on Algorithms with the goal of developing our technical knowledge and communication skills
• Advised other teams on Windows Driver Development
My education
London Metropolitan University
Bachelor's, Electronic & Communications