In Part 1 on GANs, we began building intuition about what GANs are, why we need them, and how the entire point of training a GAN is to create a generator model that can convert a random noise vector into a (beautiful) almost-real image. Since we have already discussed the pseudocode in great depth in Part 1, be sure to check that out, as there will be a lot of references to it!
In case you would like to follow along, here is the GitHub notebook containing the source code for training GANs using the PyTorch framework.
The whole idea behind training a GAN is to obtain a Generator network (with the optimal model weights, layers, and so on) …
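To make that concrete, here is a minimal sketch of a single GAN training step in PyTorch. This is not the notebook's code: the fully connected layers, dimensions, batch size, and learning rates below are placeholders chosen purely for illustration.

import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # placeholder sizes, not the notebook's values

# Toy Generator: random noise vector -> fake (flattened) image
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Toy Discriminator: (flattened) image -> probability that it is real
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.randn(32, data_dim)   # stand-in for a batch of real images
real_labels = torch.ones(32, 1)
fake_labels = torch.zeros(32, 1)

# Discriminator step: push D(real) towards 1 and D(G(z)) towards 0
noise = torch.randn(32, latent_dim)
fake_batch = generator(noise)
loss_real = criterion(discriminator(real_batch), real_labels)
loss_fake = criterion(discriminator(fake_batch.detach()), fake_labels)
loss_d = loss_real + loss_fake
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: try to fool the (just updated) discriminator into saying "real"
loss_g = criterion(discriminator(fake_batch), real_labels)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()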
Right off the bat, time-series data is not your average dataset! You might have worked with housing data, wherein each row represents the features of a particular house (such as total area, number of bedrooms, year in which it was built), or a student dataset, wherein each row represents information about a student (such as age, gender, prior GPA). What all of these datasets have in common is that the samples (or rows), in general, are independent of each other. What sets them apart from time-series data is that in the latter, each row represents a point in time, so naturally there is an inherent ordering to the data. …
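To make the contrast concrete, here is a small sketch of how time-series data is commonly prepared: instead of treating rows as independent samples, we slice the ordered series into overlapping windows. The window length of 24 is an arbitrary choice for illustration, not a value from this article.

import numpy as np

def make_windows(series, seq_len=24):
    """Slice an ordered series into overlapping windows of length seq_len.

    Unlike the rows of a housing or student dataset, these windows keep
    the temporal ordering of the original observations intact."""
    windows = [series[i : i + seq_len] for i in range(len(series) - seq_len + 1)]
    return np.stack(windows)

# 100 time steps with 3 features each -> an array of shape (77, 24, 3)
data = np.random.rand(100, 3)
print(make_windows(data).shape)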
Note: Quite frankly, there are already a zillion articles out there explaining the intuition behind GANs. While I will briefly touch upon it, the rest of the article will be an absolute deep dive into the GAN architecture and, mainly, the code, but with a very, very detailed explanation of the pseudocode (open-sourced as an example by PyTorch on GitHub).
To put it simply, GANs let us generate incredibly realistic data (based on some existing data). Be it human faces, songs, Simpsons characters, textual descriptions, essay summaries, movie posters — GANs got it all covered!
To generate realistic images, GANs must know (or more specifically learn) the underlying distribution of data. …
This is Part 2 of the Interview Question series that I recently started. In Part 1, we talked about another important data science interview question pertaining to scaling your ML model. Be sure to check that out!
A typical open-ended question that often comes up during interviews (both first and second round) is related to your personal (or side) projects. This question can take on many forms, for instance:
Kaggle is a goldmine of amazing datasets when it comes to machine learning projects. Let's see how we can load one of them into our ML workspace in the Azure portal.
Dataset
As part of this tutorial, we will be loading the Human Faces dataset available on Kaggle. This is what I used for training GANs from scratch on custom image data.
Procuring Kaggle API key
Get your Kaggle username and API key. To create a key, go to the Account section of your Kaggle profile, scroll to the API section, and click "Create New API Token"; this downloads a kaggle.json file containing both your username and key.
Step 1: Log in to Azure ML Studio, create a new notebook, and select a compute instance to run the notebook.
Step 2: Open the terminal window (next to the magnifying glass icon for searching file names)
Step 3: Create a virtual environment using conda
In the terminal, type the following to create a new environment called newenvtf:
conda create -y --name newenvtf
Step 4: Activate the environment. Again, in the terminal:
conda activate newenvtf
After running the above command, you will notice that the terminal prompt changes to (newenvtf).
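From here, the usual next steps are to install the Kaggle package inside this environment (pip install kaggle) and authenticate with the credentials from your kaggle.json. A minimal sketch using the Kaggle Python client is shown below; the dataset slug is a placeholder, so substitute the exact owner/dataset-name string from the dataset's Kaggle page.

import os

# Credentials go in either ~/.kaggle/kaggle.json or these environment variables;
# the placeholder values below come from the kaggle.json you downloaded earlier.
os.environ["KAGGLE_USERNAME"] = "<your-kaggle-username>"
os.environ["KAGGLE_KEY"] = "<your-api-key>"

from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()

# Placeholder slug: copy the exact "owner/dataset-name" from the dataset's Kaggle page
api.dataset_download_files("owner/human-faces-dataset", path="data", unzip=True)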
Disclaimer: I was tempted to write this article mainly because I was unable to find many tutorials that demonstrate how to use AutoKeras with self-collected custom datasets (i.e. datasets other than popular deep learning datasets like MNIST, ImageNet, or CIFAR-10). Additionally, with the latest version of TF rolled out, many functions (used in existing tutorials) are now obsolete and require an update.
1 line of code for people in a hurry:
import autokeras as ak  # the one line below assumes X and y are already loaded as arrays

ak.ImageClassifier(max_trials=200).fit(x=X, y=y, epochs=3, validation_split=0.2)
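If your images live in a self-collected folder rather than a built-in dataset, one way (among several) to feed them to AutoKeras is via its image_dataset_from_directory helper. The folder path, image size, and trial count below are illustrative assumptions, not values from this article.

import autokeras as ak

# Hypothetical folder layout: one sub-folder per class, e.g. data/train/cat, data/train/dog
train_data = ak.image_dataset_from_directory(
    "data/train",
    image_size=(128, 128),
    batch_size=32,
)

clf = ak.ImageClassifier(max_trials=5, overwrite=True)  # small trial budget for this sketch
clf.fit(train_data, epochs=3)

model = clf.export_model()  # the best model found, as a regular Keras model
model.summary()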
In our previous articles on Deep Learning for Beginners, we learned how to build an image classification model using PyTorch and TensorFlow. For both models, I intentionally avoided going into the details of the hyperparameter optimization process or tinkering with the complexity of the network architecture, both of which are useful for improving the accuracy of models. …
I love Keras. There, I said it! However…
As an applied data scientist, nothing gives me more pleasure than quickly whipping up a functional neural network with as little as three lines of code! However, as I have begun to delve deeper into the dark web of neural nets, I have to accept that PyTorch gives you much greater control over your network's architecture.
Given that most of us are pretty comfortable with Keras (if not, see here for a warm intro to Keras), learning to create a similar network in PyTorch (whilst learning PyTorch basics) isn't challenging at all. …
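To give a flavour of what "a similar network" means, here is a toy two-layer classifier written first with the Keras Sequential API and then as a PyTorch nn.Module. The layer sizes are arbitrary and not taken from the article's example.

import torch
import torch.nn as nn
from tensorflow import keras

# Keras: declare the layers and let the Sequential API wire them together
keras_model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
keras_model.compile(optimizer="adam", loss="binary_crossentropy")

# PyTorch: the same two layers, but you write the forward pass yourself
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(20, 64)
        self.fc2 = nn.Linear(64, 1)

    def forward(self, x):
        return torch.sigmoid(self.fc2(torch.relu(self.fc1(x))))

torch_model = TinyNet()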
Welcome to Part 2 of the Neural Network series! In Part 1, we worked our way through an Artificial Neural Network (ANN) using the Keras API. We talked about the Sequential network architecture, activation functions, hidden layers, neurons, etc., and finally wrapped it all up in an end-to-end example that predicted whether a loan application would be approved or rejected.
In this tutorial, we will learn how to create a Convolutional Neural Network (CNN) using the Keras API. To make it more intuitive, I will explain what each layer of the network does and provide tips and tricks to ease your deep learning journey. Our aim is to build a basic CNN that can classify chest X-ray images and establish whether each one is normal or shows pneumonia. …
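As a preview, a bare-bones version of such a CNN in Keras might look like the sketch below; the image size, filter counts, and layer choices are placeholders rather than the tutorial's exact architecture.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(150, 150, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # single output: normal vs. pneumonia
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()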
If you are asking, "Should I use Keras OR TensorFlow?", you are asking the wrong question.
When I first started my deep-learning journey, I kept thinking these two are completely separate entities. Well, as of mid-2017, they are not! Keras, a neural network API, is now fully integrated within TensorFlow. What does that mean?
It means you have a choice between the high-level Keras API and the low-level TensorFlow API. High-level APIs pack more functionality into a single command and are easier to use than low-level APIs, which makes them approachable even for people without a deep technical background. …
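To see the difference in practice, here is one dense layer written twice: once with the high-level Keras API and once with low-level TensorFlow ops. The shapes are arbitrary and only meant to illustrate the contrast.

import tensorflow as tf

x = tf.random.normal((8, 20))  # a dummy batch: 8 samples, 20 features each

# High-level: one Keras call creates the weights and the computation for you
dense = tf.keras.layers.Dense(10, activation="relu")
y_high = dense(x)

# Low-level: you create the variables and spell out the matrix math yourself
w = tf.Variable(tf.random.normal((20, 10)))
b = tf.Variable(tf.zeros((10,)))
y_low = tf.nn.relu(tf.matmul(x, w) + b)

print(y_high.shape, y_low.shape)  # both (8, 10)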