Data Scientist @ Alan Turing Institute | ‘Explain Like I’m five’ proponent | Ph.D. Learning Analytics | Oxford & SFU Alumni

Deep Learning

Learn about the different layers that go into a GAN’s architecture, debug some common runtime errors, and develop in-depth intuition for writing code in PyTorch.

Image by Comfreak from Pixabay

In Part 1 on GANs, we started to build intuition regarding what GANs are, why we need them, and how the entire point behind training GANs is to create a generator model that knows how to convert a random noise vector into a (beautiful) almost real image. Since we have already discussed the pseudocode in great depth in Part 1, be sure to check that out as there will be a lot of references to it!

In case you would like to follow along, here is the GitHub notebook containing the source code for training GANs using the PyTorch framework.

The whole idea behind training a GAN network is to obtain a Generator network (with the optimal model weights, layers, etc.) …
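As a sketch of what such a generator looks like in PyTorch, here is a minimal, illustrative module; the layer sizes and the 28×28 single-channel output are assumptions for the example, not the notebook’s actual architecture:

```python
import torch
import torch.nn as nn

# Minimal generator sketch: maps a 100-dim noise vector to a 28x28 image.
# Layer sizes here are illustrative, not the ones used in the notebook.
class Generator(nn.Module):
    def __init__(self, noise_dim=100, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, img_dim),
            nn.Tanh(),  # squashes pixel values into [-1, 1]
        )

    def forward(self, z):
        # Reshape the flat output into (batch, channels, height, width)
        return self.net(z).view(-1, 1, 28, 28)

z = torch.randn(16, 100)        # batch of 16 random noise vectors
fake_images = Generator()(z)    # shape: (16, 1, 28, 28)
```

Training then consists of nudging these weights until the discriminator can no longer tell `fake_images` from real ones.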


Bonus intro to keywords like seasonality, trend, autocorrelation, and much more.

Source: memegenerator

Right off the bat, time-series data is not your average dataset! You might have worked with housing data, wherein each row represents the features of a particular house (such as total area, number of bedrooms, or the year it was built), or a student dataset, wherein each row represents information about a student (such as age, gender, or prior GPA). What all these datasets have in common is that the samples (or rows), in general, are independent of each other. What sets time-series data apart is that each row represents a point in time, so naturally, there is an inherent ordering to the data. …
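To make keywords like trend and autocorrelation concrete, here is a small pandas sketch on a synthetic series built just for illustration: a rolling mean exposes the trend, and `autocorr` measures how strongly the series correlates with its own past.

```python
import numpy as np
import pandas as pd

# Synthetic daily series: an upward trend plus a repeating weekly pattern,
# indexed by date so the time ordering is explicit.
idx = pd.date_range("2021-01-01", periods=60, freq="D")
values = np.arange(60) * 0.5 + np.tile([0, 1, 2, 3, 2, 1, 0], 9)[:60]
series = pd.Series(values, index=idx)

trend = series.rolling(window=7).mean()  # smooths out the weekly seasonality
lag_7 = series.autocorr(lag=7)           # correlation with the series 7 days ago
```

For a series with a 7-day seasonal pattern, the lag-7 autocorrelation comes out strongly positive, which is exactly the kind of structure independent-row datasets don’t have.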


DEEP LEARNING FOR BEGINNERS

The basic intuition behind GANs, objective functions, generator and discriminator architectures, and in-depth pseudocode walkthrough.

Image by Iván Tamás from Pixabay

Note: Quite frankly, there are already a zillion articles out there explaining the intuition behind GANs. While I will briefly touch upon it, the rest of the article will be an absolute deep dive into the GAN architecture and, mainly, the code — but with a very detailed explanation of the pseudocode (open-sourced as an example by PyTorch on GitHub).

Why do I need GANs?

To put it simply, GANs let us generate incredibly realistic data (based on some existing data). Be it human faces, songs, Simpsons characters, textual descriptions, essay summaries, movie posters — GANs got it all covered!

How does it even work?

To generate realistic images, GANs must know (or more specifically learn) the underlying distribution of data. …


With a bonus sample script at the end that lets you show off your tech skills discreetly!

This is Part 2 of the Interview Question series that I recently started. In Part 1, we talked about another important data science interview question pertaining to scaling your ML model. Be sure to check that out!

Source: Pinterest

Interviews can be intimidating, but explaining a project you put your blood and sweat into shouldn’t be!

A typical open-ended question that often comes up during interviews (both first and second round) is related to your personal (or side) projects. This question can take on many forms, for instance:

  • Can you walk us through a recent project you completed?
  • Can you tell us about a time you were part of a challenging project?
  • What are some interesting projects you have worked on?


AZURE MINI TUTORIALS

It is easier than it looks

Kaggle is a goldmine of amazing datasets for machine learning projects. Let’s see how we can load one of them into our ML workspace in the Azure portal.

Dataset

As part of this tutorial, we will be loading the Human Faces dataset available on Kaggle. This is what I used for training GANs from scratch on custom image data.

Procuring a Kaggle API key

Get your Kaggle username and API key. To create a key:

  • Go to your Kaggle account → Settings → Account → Create a new API token.
  • A kaggle.json file will be downloaded; it will contain your username and API key. …
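Once you have kaggle.json, the usual next steps look roughly like this — a sketch assuming the Kaggle CLI is installed (`pip install kaggle`); the dataset slug is a placeholder you would copy from the dataset’s Kaggle page:

```shell
# Put the downloaded key where the Kaggle CLI expects it.
mkdir -p ~/.kaggle
mv kaggle.json ~/.kaggle/
chmod 600 ~/.kaggle/kaggle.json   # the CLI warns if the key is world-readable

# Download and unzip a dataset; replace the slug with the real one
# shown on the dataset's Kaggle page.
kaggle datasets download -d <owner>/<dataset-name> --unzip
```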


MINI AZURE TUTORIALS

And how to install packages in a conda virtual environment...

Step 1: Log in to Azure ML Studio, create a new notebook, and select a compute instance to run the notebook.

Step 2: Open the terminal window (next to the magnifying glass icon for searching file names).


Step 3: Create a virtual environment using conda

In the terminal, type the following to create a new environment called newenvtf.

conda create -y --name newenvtf

Step 4: Activate the environment. Again, in the terminal

conda activate newenvtf

You will notice that the prompt in the terminal changes to (newenvtf) after running the above command.
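From here, the package-installation step the title promises would look something like this — a sketch in which TensorFlow and pandas are just example packages, and the Python version pin is an assumption:

```shell
# Inside the activated (newenvtf) environment. The env was created empty,
# so install Python (and pip) into it before installing anything else.
conda install -y python=3.8 pip

# Install packages with conda when a conda build exists...
conda install -y pandas

# ...or with pip otherwise (tensorflow is just the example here).
pip install tensorflow
```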


DEEP LEARNING FOR BEGINNERS

No need to even know what a Conv2d, Maxpool, or Batch Normalization layer does!

How quickly can you classify people into age groups based on their hands? You are one-line-of-code away from knowing. Image Source

Disclaimer: I was tempted to write this article mainly because I was unable to find many tutorials that demonstrate how to use AutoKeras with self-collected custom datasets (i.e. datasets other than popular deep learning datasets like MNIST, ImageNet, or CIFAR-10). Additionally, with the latest version of TF rolled out, many functions (used in existing tutorials) are now obsolete and require an update.

TL;DR

1 line of code for people in a hurry:

ImageClassifier(max_trials=200).fit(x=X, y=y, epochs=3, validation_split=0.2)

Introduction

In our previous articles on Deep Learning for Beginners, we learned how to build an image classification model using PyTorch and TensorFlow. For both models, I intentionally avoided going into the details of the hyperparameter optimization process or tinkering with the complexity of the network architecture, both of which are useful for improving the accuracy of models. …


DEEP LEARNING FOR BEGINNERS

Learn the basics of creating a neural network in PyTorch

Created by Author on Imgflip

I love Keras, there I said it! However…

As an applied data scientist, nothing gives me more pleasure than quickly whipping up a functional neural network with as little as three lines of code! However, as I have begun to delve deeper into the dark web of neural nets, I have come to accept that PyTorch does allow you much greater control over your network’s architecture.

Given that most of us are pretty comfortable with Keras (if not, see here for a warm intro to Keras), learning to create a similar network in PyTorch (whilst learning PyTorch basics) isn’t challenging at all. …
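As a taste of what that looks like, here is a minimal Keras-style network written in PyTorch; the layer sizes (4 input features, one hidden layer, 3 classes) are illustrative, not from any particular dataset:

```python
import torch
import torch.nn as nn

# A Keras-Sequential-style fully connected network, written in PyTorch.
model = nn.Sequential(
    nn.Linear(4, 16),   # 4 input features -> 16 hidden units
    nn.ReLU(),
    nn.Linear(16, 3),   # 16 hidden units -> 3 output classes
)

x = torch.randn(5, 4)   # batch of 5 samples
logits = model(x)       # shape: (5, 3)
```

`nn.Sequential` is the closest analogue to Keras’s `Sequential`; the extra control PyTorch offers comes from writing your own `nn.Module` subclass instead.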


DEEP LEARNING FOR BEGINNERS

Explaining an end-to-end binary image classification model with MaxPool2D, Conv2D, and Dense layers.

Image by Pete Linforth from Pixabay

Welcome to Part 2 of the Neural Network series! In Part 1, we worked our way through an Artificial Neural Network (ANN) using the Keras API. We talked about the Sequential network architecture, activation functions, hidden layers, neurons, etc., and finally wrapped it all up in an end-to-end example that predicted whether a loan application would be approved or rejected.

In this tutorial, we will be learning how to create a Convolutional Neural Network (CNN) using the Keras API. To make it more intuitive, I will explain what each layer of this network does and provide tips and tricks to ease your deep learning journey. Our aim in this tutorial is to build a basic CNN that can classify chest X-ray images and establish whether an X-ray is normal or shows pneumonia. …
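As a preview, a minimal binary classifier using exactly the three layer types named above might look like this; the 64×64 grayscale input size is an assumption for the example, not the tutorial’s actual X-ray resolution:

```python
from tensorflow.keras import layers, models

# Minimal binary image classifier: Conv2D extracts local features,
# MaxPool2D downsamples them, and Dense layers make the final call.
model = models.Sequential([
    layers.Conv2D(16, (3, 3), activation="relu", input_shape=(64, 64, 1)),
    layers.MaxPool2D((2, 2)),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # one output: normal vs. pneumonia
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

The sigmoid output plus binary cross-entropy is the standard pairing for a two-class problem like normal vs. pneumonia.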


DEEP LEARNING FOR BEGINNERS

Tips and tricks to create network architecture, train, validate, and save the model and use it to make inferences.

Image by Gerd Altmann from Pixabay

Why Keras, not Tensorflow?

If you are asking, “Should I use Keras OR TensorFlow?”, you are asking the wrong question.

When I first started my deep-learning journey, I kept thinking these two are completely separate entities. Well, as of mid-2017, they are not! Keras, a neural network API, is now fully integrated within TensorFlow. What does that mean?

It means you have a choice between using the high-level Keras API, or the low-level TensorFlow API. High-level APIs provide more functionality within a single command and are easier to use (in comparison with low-level APIs), which makes them usable even for non-tech people. …
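To see the difference concretely, here is the same computation written both ways — a sketch in which the low-level path reuses the Keras layer’s own weights so the two results match:

```python
import tensorflow as tf

x = tf.random.normal((5, 4))  # batch of 5 samples, 4 features each

# High-level Keras API: one call builds the weights, bias, and activation.
dense = tf.keras.layers.Dense(3, activation="relu")
high = dense(x)

# Low-level TensorFlow: the same computation written out by hand,
# reusing the layer's weights so the two paths agree exactly.
w, b = dense.kernel, dense.bias
low = tf.nn.relu(tf.matmul(x, w) + b)
```

One Keras call replaces the explicit matrix multiply, bias add, and activation — that is the “more functionality within a single command” in practice.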
