How To Become A Machine Learning Engineer: Learning Path

Andrey Nikishaev
Machine Learning World
Aug 19, 2017


We will walk you through all aspects of machine learning, from simple linear regression to the latest neural networks, and you will learn not only how to use them but also how to build them from scratch.

A big part of this path is oriented toward Computer Vision (CV), because it's the fastest way to gain general knowledge, and experience from CV transfers easily to any other ML area.

We will use TensorFlow as the ML framework, as it is the most promising and production-ready option; a minimal example of the kind of model we start from is sketched below.
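As a taste of the "from scratch" part, here is a minimal sketch of fitting a plain linear regression with gradient descent, written against the TensorFlow 1.x API that was current when this article was published (the toy data and learning rate are made up for illustration):

    import numpy as np
    import tensorflow as tf

    # Toy data: y = 3x + 2 with a little noise (made-up numbers, just for illustration)
    x_data = np.random.rand(100).astype(np.float32)
    y_data = 3.0 * x_data + 2.0 + np.random.normal(0, 0.05, 100).astype(np.float32)

    # Model: y_pred = w * x + b, trained by minimizing mean squared error
    w = tf.Variable(0.0)
    b = tf.Variable(0.0)
    y_pred = w * x_data + b
    loss = tf.reduce_mean(tf.square(y_pred - y_data))
    train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for step in range(200):
            sess.run(train_op)
        print(sess.run([w, b]))  # should end up close to [3.0, 2.0]

The courses below start from exactly this kind of model and work their way up to deep networks.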

You will learn best if you work through theoretical and practical materials in parallel, so you get hands-on experience with what you have just learned.

Also, if you want to compete with other people on solving real-life problems, I recommend registering on Kaggle; it can also be a good addition to your resume.

Requirements:
Python. You don't have to be a guru; basic knowledge will be just fine. For everything else there are manuals :)

1. Courses:

1.1 Practical Machine Learning by Johns Hopkins University

1.2 Machine Learning by Stanford University
These first two will teach you the basics of data science and machine learning and will prepare you for the really hard stuff :)

1.3 Deep Learning course from Andrew Ng
A good set of courses from the famous Andrew Ng.

1.4 CS231n: Convolutional Neural Networks for Visual Recognition 2017 (2016)
That's where the party starts. It's one of the best courses on ML & CV you can find on the Internet; it will not only show you how deep the rabbit hole goes, but also give you a good base for further exploration.

1.5 Object Detection with PyTorch
A very good practical course that tells the story of object detection with CNNs, from the simplest models to the state of the art. All lectures run in Google Colab.

1.6* Deep Learning by Google
Optional course. You can take just the practical part of it.

1.7* CS224d: Deep Learning for Natural Language Processing
Optional course for those who want to work with Natural Language Processing. And yeah, it is also great :)

1.8* Deep Learning book
A good handbook that covers many aspects of ML.

2. Practical part:

This list consists of tutorials and projects that you should try, understand how they work, and think about how you could improve them. It is meant to build your expertise and interest in ML, so don't be afraid if some of the tasks are hard for you; you can come back to them when you are ready.

2.1. Simple practical course on TensorFlow from Kadenze
2.2. TensorFlow cookbook
2.3. Tensorflow-101 tutorial set

2.4. IBM Code Patterns
Code patterns from IBM, which also include Data Science & Analytics.

2.5. Fast Style Transfer Network
This shows how you can use a neural network to transfer the style of famous paintings onto any photo.

2.6. Image segmentation

2.7. Object detection with SSD
One of the fastest (and simplest) models for object detection.

2.8. Fast Mask RCNN for object detection and segmentation

2.9. Reinforcement learning
Very useful, especially if you want to build a robot or the next Dota AI :)

2.10. Magenta project from Google Brain team
A project that aims to create compelling art and music with the help of neural networks. The results are remarkable.

2.11. Deep Bilateral Learning for Real-Time Image Enhancement
An awesome new photo-enhancement algorithm from Google.

2.12. Self-driving car project
Want to make your car fully automatic? That's a good starting point.

3. FAQ

What to do if you are stuck?
First, you must understand that ML is not 100% precise; most of the time it is a good guess plus tons of tuning iterations. Coming up with a truly unique idea on your own is very hard in most cases because of the time and resources you would spend training models, so don't try to figure out the solution entirely by yourself: search for papers, projects, and people who can help you. The faster you gain experience, the better.
Some websites that can help you: http://www.gitxiv.com/, http://www.arxiv-sanity.com/, https://arxiv.org/, https://stackoverflow.com

Why do papers not fully cover the problem, or contain mistakes in some places?
It's a pity to say, but not everyone wants to open their work fully to the public, yet everyone needs publications to get grants and recognition. So some authors publish only part of the material, or make mistakes in the formulas. That's why it's often better to search for the code rather than just the paper. Think of a paper as evidence that a certain problem has been solved.

Where can I find fresh materials?
I use these websites: http://www.gitxiv.com/, http://www.arxiv-sanity.com/, https://arxiv.org/. The first one finds not only the paper but also the code for it, so it is more practical.

Should I use the cloud or a PC/laptop for computing?
The cloud is the best fit for the heavy computation behind production models. For learning and experiments, it is much cheaper to use a PC/laptop with a CUDA-capable graphics card. For example, I train all my models on a laptop with a GeForce GTX 960M (640 CUDA cores); a quick way to verify your setup is shown below.
Of course, if you have grants or spare money for cloud compute, you can use it.
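If you go the local-GPU route, it's worth checking that TensorFlow actually sees your CUDA card before starting long training runs. A minimal check (again assuming TensorFlow 1.x):

    import tensorflow as tf
    from tensorflow.python.client import device_lib

    # True if TensorFlow can use at least one GPU
    print(tf.test.is_gpu_available())

    # Lists all devices TensorFlow sees, e.g. '/device:CPU:0' and '/device:GPU:0'
    print([d.name for d in device_lib.list_local_devices()])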

How can I improve the tuning of my models' hyperparameters?
The main problem in training is time: you can't just sit and watch the training stats. For this reason I recommend a simple form of intelligent grid search. Create the sets of hyperparameters and model architectures that you think could work best, then run them one after another, saving the results. That way you can run training overnight and compare the results the next day to find the most promising configuration.
You can see how this is done in the scikit-learn library (a sketch follows below): http://scikit-learn.org/stable/modules/grid_search.html
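As a minimal sketch of the same idea, here is scikit-learn's GridSearchCV from the link above applied to a toy dataset; the SVM model and the parameter grid are just placeholders for your own architecture and hyperparameter sets:

    from sklearn.datasets import load_digits
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)

    # Placeholder grid: every combination will be trained and cross-validated
    param_grid = {
        "C": [0.1, 1, 10],
        "gamma": [0.001, 0.01, 0.1],
    }

    search = GridSearchCV(SVC(), param_grid, cv=3, n_jobs=-1)
    search.fit(X, y)  # runs all 9 configurations, keeping the scores

    print(search.best_params_)  # the most promising configuration
    print(search.best_score_)   # its cross-validated accuracy

The same loop-over-configurations pattern works for neural networks too: save each run's metrics to a file overnight so you can compare them in the morning.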

Support

Become a Patron and support our community to make more interesting articles & tutorials

Get interesting articles every day — Subscribe on Telegram Channel

I also recommend bookmarking this article; you will need it as you start practicing :)
