Deep Learning, explained to a five-year-old (okay, maybe a fifteen-year-old): data science has been really good for a while now at handling data that fits in an Excel spreadsheet, i.e. columns and rows: one row per observation, one column per variable. This is called structured data. Deep Learning lets us create rows of column variables that represent unstructured data, like images or text. It's as if you had an automatic algorithm that could look through all your images and create one column for the likelihood an image contains a cat, another for the likelihood it contains a shovel -- without having to tell the algorithm what a cat or a shovel is, what they look like, or even that there are cats and shovels in the pictures at all.

Deep Learning is rather math-intensive, and involves neural networks, a family of algorithms that has been around for a long time but has now come into its own. Unlike some skills, you can't learn it as a black box and then slowly come to understand it as you use it. There are foundations you need to acquire, tutorials you need to absorb.
I live in Montreal, which recently hosted its annual Deep Learning Summer School; I couldn't attend, but I heard great things about the lecture by Université de Sherbrooke's Hugo Larochelle.
There's just one thing: I hate listening to videos. It's why I don't take Coursera classes now that my daily commute is short. I need to learn at my own pace, and I prefer to read.
So when I realized Larochelle's lecture was based on a series of 92 videos on his YouTube channel, I wrote a Python script that adds a black bar beneath each video, burns the subtitles into it, takes a screenshot at every new subtitle line, and bundles the screenshots into a PDF I can read.
Here's an example screenshot:
I'm sharing the fruits of my labor with you here: videos with subtitles, PDFs of subtitled screenshots, and the Python code I used to make them.
Hugo Larochelle neural network lecture videos & PDFs with subtitles
These are zip files of subtitled videos and PDFs of screenshots made from Hugo Larochelle's (Université de Sherbrooke) YouTube playlist of 92 videos in 10 parts on neural networks.
- Subtitled MP4s for Part 01, Feedforward neural networks [zip, 108.5 MB]
- Subtitled MP4s for Part 02, Training neural networks [zip, 238.6 MB]
- Subtitled MP4s for Part 03, Conditional random fields [zip, 250.2 MB]
- Subtitled MP4s for Part 04, Training CRFs [zip, 106.6 MB]
- Subtitled MP4s for Part 05, Restricted Boltzmann Machine [zip, 169.1 MB]
- Subtitled MP4s for Part 06, Autoencoder [zip, 136.8 MB]
- Subtitled MP4s for Part 07, Deep Learning [zip, 226.8 MB]
- Subtitled MP4s for Part 08, Sparse coding [zip, 152.8 MB]
- Subtitled MP4s for Part 09, Computer vision [zip, 191.4 MB]
- Subtitled MP4s for Part 10, Natural Language Processing [zip, 289.5 MB]
PDFs of screenshots:
- PDFs for Part 01, Feedforward neural networks [zip, 120.9 MB]
- PDFs for Part 02, Training neural networks [zip, 287.0 MB]
- PDFs for Part 03, Conditional random fields [zip, 284.2 MB]
- PDFs for Part 04, Training CRFs [zip, 132.7 MB]
- PDFs for Part 05, Restricted Boltzmann Machine [zip, 189.8 MB]
- PDFs for Part 06, Autoencoder [zip, 161.1 MB]
- PDFs for Part 07, Deep Learning [zip, 307.9 MB]
- PDFs for Part 08, Sparse coding [zip, 200.9 MB]
- PDFs for Part 09, Computer vision [zip, 239.8 MB]
- PDFs for Part 10, Natural Language Processing [zip, 371.6 MB]
To produce these, my Python script:
- used requests and BeautifulSoup to parse the YouTube playlist;
- used youtube-dl to download the videos and WEBVTT subtitles;
- used pycaption to convert subtitles to SRT format;
- used ffmpeg (from a subprocess call) to add a black letterbox below each video, burn the subtitles into that box and then save png screenshots wherever there was a new subtitle line;
- used imagemagick to bundle pngs into pdfs;
- used zipfile to zip similar files together and deleted the originals.
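The actual script used pycaption for the WEBVTT-to-SRT step, which handles the format's many edge cases. As a standard-library-only illustration of what that conversion involves, here is a minimal sketch that only handles plain cues (no styling, identifiers with cue settings, or multi-line headers):

```python
def vtt_to_srt(vtt_text):
    """Convert simple WEBVTT cues to SRT: drop the WEBVTT header,
    number each cue, and switch the millisecond separator from
    '.' to ','. A stdlib sketch, not a full replacement for pycaption."""
    blocks = [b for b in vtt_text.strip().split("\n\n") if "-->" in b]
    out = []
    for i, block in enumerate(blocks, 1):
        lines = block.splitlines()
        # Find the timing line (some VTT cues put an identifier first).
        t = next(j for j, ln in enumerate(lines) if "-->" in ln)
        # Keep only "start --> end", dropping any cue settings after it.
        timing = " ".join(lines[t].split(" ")[0:3]).replace(".", ",")
        out.append("\n".join([str(i), timing] + lines[t + 1:]))
    return "\n\n".join(out) + "\n"
```

pycaption's reader/writer pair does the same job more robustly; this sketch just makes the format difference concrete.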
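The letterboxing and screenshot steps can be sketched as ffmpeg command builders: pad the frame with a black bar below, burn the SRT into it, then grab one frame at the start of each subtitle cue. The 80-pixel bar height, the file names, and the helper names are my assumptions here, not details from the original script:

```python
import re


def letterbox_cmd(video, srt, out):
    """Build an ffmpeg command that pads a black bar below the frame
    and burns the SRT subtitles into the video (80 px is an assumed height)."""
    vf = f"pad=iw:ih+80:0:0:black,subtitles={srt}"
    return ["ffmpeg", "-y", "-i", video, "-vf", vf, out]


def srt_start_times(srt_text):
    """Return the start timestamp of every subtitle cue, i.e. the
    moments where a screenshot should be taken."""
    return re.findall(r"(\d\d:\d\d:\d\d[,.]\d{3}) -->", srt_text)


def screenshot_cmd(video, timestamp, png):
    """Build an ffmpeg command that seeks to a cue's start time and
    saves a single frame as a PNG."""
    ts = timestamp.replace(",", ".")  # ffmpeg -ss wants '.' milliseconds
    return ["ffmpeg", "-y", "-ss", ts, "-i", video, "-frames:v", "1", png]
```

Each command list would then be run with `subprocess.run(cmd, check=True)`, matching the subprocess call mentioned above.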
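The final bundling step is straightforward with the standard library; a minimal sketch (file names hypothetical):

```python
import os
import zipfile


def zip_and_remove(paths, archive):
    """Bundle related files into one zip archive, then delete the
    originals to reclaim disk space."""
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in paths:
            zf.write(p, arcname=os.path.basename(p))
    for p in paths:
        os.remove(p)
```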