This course lays a solid foundation in TensorFlow, a leading machine learning library from Google's AI team. You will see how TensorFlow can be used to build a range of machine learning models, from custom deep neural networks to transfer learning models built on networks released by large technology companies. You will also learn how to use and reuse TensorFlow effectively and apply it to industry-relevant problems.
Introducing TensorFlow
The learning goals of this course are discussed. At the end of this course, you will be able to use TensorFlow for practical business applications.
In this lecture, you will learn about some practical applications where TensorFlow can be (and is already being) used, such as face detection in images, self-driving cars, Amazon Alexa, and more.
In this lecture, you will learn why TensorFlow was developed by the Google Brain team. The key takeaway is that TensorFlow serves as both an interface and an implementation for machine learning algorithms.
In this lecture, you will learn how TensorFlow supports the interface part and how it supports the implementation part. The key takeaway is TensorFlow's ability to scale seamlessly from a small-scale model implemented on your laptop to a full-scale deployment.
In this lecture, you will learn about the options available for the TensorFlow 'Language Interface' and the TensorFlow 'Execution Environment'.
In this lecture, you will learn about the abstract concept of a 'tensor', and what tensors of dimension 0, 1, 2, and so on look like (a small sketch follows).
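As a quick illustration, here is a minimal sketch (assuming the TensorFlow 1.x-style API used throughout this course) of tensors of dimension 0, 1, and 2:

```python
import tensorflow as tf

scalar = tf.constant(3.0)                       # dimension 0: a single number
vector = tf.constant([1.0, 2.0, 3.0])           # dimension 1: a list of numbers
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # dimension 2: a table of numbers

print(scalar.shape, vector.shape, matrix.shape)  # (), (3,), (2, 2)
```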
At the end of this lecture, you will be able to:
- understand the concept of a 'computation graph' through a very simplified example (sketched below)
- understand the reason behind the name TensorFlow
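A minimal sketch of the idea, assuming the TensorFlow 1.x graph-and-session style used in this course: operations are first assembled into a graph, and tensors "flow" through it only when the graph is executed, hence the name.

```python
import tensorflow as tf

# Build the computation graph: nothing is computed yet.
a = tf.constant(2.0, name="a")
b = tf.constant(3.0, name="b")
c = tf.add(a, b, name="c")       # c = a + b
d = tf.multiply(c, c, name="d")  # d = c * c

# Execute the graph: tensors flow through the operations.
with tf.Session() as sess:
    print(sess.run(d))  # 25.0
```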
In this lecture, mandatory and non-mandatory skills for taking this course are discussed.
Mandatory skills for this course:
- some programming experience in at least one high-level language
- understanding of basic mathematics and statistics
- willingness to learn new concepts
This lecture lists the key skills that you will have developed by the end of this course.
To summarize, the key skills that you will gain are:
- Tensorflow - understanding and implementation
- Artificial Neural Network (ANN)
- Debugging and monitoring ANN models
- Transfer Learning
- Keras, TFLearn
At the end of this lecture, you will be able to:
- install TensorFlow on your machine
- verify whether the installation was successful (a quick check is sketched below)
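A minimal verification sketch (the exact install command depends on your platform and the TensorFlow version the course targets):

```python
# Typically installed with:  pip install tensorflow
import tensorflow as tf

print(tf.__version__)                     # confirms the package is importable
print(tf.constant("Hello, TensorFlow"))   # builds a tensor, confirming the library works
```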
In this lecture, you will learn about the concepts behind linear regression and how they are implemented. In the subsequent lectures, you will see how these implementations are done using TensorFlow.
At the end of this lecture, you will be able to:
- prepare artificial data
- normalize the data before modelling (a sketch follows below)
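A minimal sketch of what preparing and normalizing artificial data can look like (assuming NumPy and a simple linear relationship with noise; the exact data used in the lecture may differ):

```python
import numpy as np

# Artificial data: y = 2x + 1 plus some noise
np.random.seed(0)
x_data = np.linspace(0.0, 10.0, 100)
y_data = 2.0 * x_data + 1.0 + np.random.normal(scale=1.0, size=x_data.shape)

# Normalize the inputs before modelling (zero mean, unit variance)
x_norm = (x_data - x_data.mean()) / x_data.std()
```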
At the end of this lecture, you will learn about:
- TensorFlow types
- TensorFlow operations
- the computational graph for linear regression (sketched below)
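A minimal sketch of the linear regression graph in TensorFlow 1.x style (the placeholder and variable names are illustrative, not the lecture's exact code):

```python
import tensorflow as tf

# Placeholders receive the data; Variables hold the parameters to be learned.
x = tf.placeholder(tf.float32, shape=[None], name="x")
y = tf.placeholder(tf.float32, shape=[None], name="y")

w = tf.Variable(0.0, name="weight")
b = tf.Variable(0.0, name="bias")

# The model itself is just another node in the computational graph.
y_pred = w * x + b
```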
At the end of this lecture, you will be able to:
- set up the loss function in TensorFlow
- use the gradient descent optimizer to minimize the loss (see the sketch below)
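Continuing the linear regression sketch above (reusing `y_pred` and `y` from it; the learning rate is an illustrative value):

```python
# Mean squared error between predictions and targets
loss = tf.reduce_mean(tf.square(y_pred - y))

# Gradient descent: each run of train_op nudges w and b to reduce the loss
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss)
```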
At the end of this lecture, you will be able to:
- set up a TensorFlow session
- understand how gradient descent works and visualize it in action for linear regression (a training-loop sketch follows)
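A continuation of the same sketch: the session runs the training loop, reusing `train_op`, `loss`, `w`, and `b` from the previous sketches and `x_norm`, `y_data` from the data-preparation sketch.

```python
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        _, current_loss = sess.run([train_op, loss],
                                   feed_dict={x: x_norm, y: y_data})
        if step % 100 == 0:
            print(step, current_loss)   # watch gradient descent reduce the loss
    print(sess.run([w, b]))             # the fitted weight and bias
```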
Building Neural Networks using TensorFlow
At the end of this lecture, you will know the concepts of rank and shape of a tensor.
In this lecture, you will learn about TensorFlow data types. TensorFlow supports standard data types such as int, float, and complex. You will also learn about data types like qint8, qint16, quint16, and so on; these are specific to TensorFlow and are called quantized integer data types.
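For illustration, a minimal sketch of these data type objects:

```python
import tensorflow as tf

print(tf.int32, tf.float32, tf.complex64)    # standard types
print(tf.qint8, tf.qint16, tf.quint16)       # TensorFlow-specific quantized integer types

x = tf.constant([1, 2, 3], dtype=tf.int32)   # the dtype is fixed when the tensor is created
print(x.dtype)
```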
In this lecture, you will learn the differences between a CPU, a GPU, and a TPU (Tensor Processing Unit).
At the end of this lecture, you will learn about the basic methods that can be applied to a tensor, such as shape and reshape (a short sketch follows).
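A minimal sketch of these methods (TensorFlow 1.x style; tf.rank returns a tensor, so it is evaluated inside a session):

```python
import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])   # a 2x3 tensor

print(t.shape)              # static shape: (2, 3)
r = tf.reshape(t, [3, 2])   # the same 6 values rearranged into a 3x2 tensor
print(r.shape)              # (3, 2)

with tf.Session() as sess:
    print(sess.run(tf.rank(t)))   # rank (number of dimensions): 2
```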
This is an introductory lecture on neural networks, their applications, and how a neural network works for recognizing images.
At the end of this lecture, you will have a clear understanding of how a single layer neural network works.
You will understand:
- the ANN architecture: input nodes, weights, activation functions, and the output of the ANN
- how a linear regression problem can be cast into the neural network architecture
- the fundamental concept of making a neural network learn by adjusting the weights so that the loss is minimized
At the end of this lecture, you will have a clear understanding of:
- a 'neuron', the fundamental building block of an artificial neural network
- weights and activation function of a 'neuron' (a minimal sketch follows)
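A minimal sketch of a single neuron, assuming illustrative weights and a sigmoid activation:

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])    # inputs to the neuron
w = tf.constant([0.5, -0.2, 0.1])   # one weight per input
b = tf.constant(0.3)                # bias

z = tf.reduce_sum(w * x) + b        # weighted sum of the inputs
output = tf.sigmoid(z)              # the activation function squashes the sum

with tf.Session() as sess:
    print(sess.run(output))
```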
At the end of this lecture, you will have a clear understanding of:
- how to build an ANN using simple neurons
- the concept of layers in an ANN
- input, hidden, output layers
- depth of an ANN
- This lecture introduces the first artificial neural network problem in this course.
- The MNIST dataset of handwritten digits is used: a total of 70,000 images of size 28x28 pixels.
At the end of this lecture, you will be able to:
- set up an ANN model using TensorFlow
- run TensorFlow to estimate the parameters
- have a clear idea of how the weights, biases, and activation function of a neural network are implemented using TensorFlow (a minimal sketch follows)
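A minimal sketch of such a model in TensorFlow 1.x style: a single-layer softmax classifier for MNIST (the data-loading helper and hyperparameters are illustrative; the lecture's own code may differ):

```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

x = tf.placeholder(tf.float32, [None, 784])        # 28x28 images, flattened
y_true = tf.placeholder(tf.float32, [None, 10])    # one-hot digit labels

W = tf.Variable(tf.zeros([784, 10]))               # weights
b = tf.Variable(tf.zeros([10]))                    # biases
logits = tf.matmul(x, W) + b                       # weighted sums for the 10 classes

loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits))
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        batch_x, batch_y = mnist.train.next_batch(100)
        sess.run(train_op, feed_dict={x: batch_x, y_true: batch_y})
```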
This lecture summarizes the section. The following topics were covered:
- introduction to neural networks and the basic concepts of a neuron, input layer, hidden layer, output layer, activation function, and so on
- a real modelling example using the MNIST dataset
The next section is about deep learning.
Deep Learning using TensorFlow
A brief lecture that starts with the basic questions: why do we need deep neural networks, and how do we deepen a neural network? Details follow in the next lecture.
This lecture introduces the following two concepts:
- neural networks are prone to overfitting (convolutional neural networks are one solution, discussed later)
- images are 3-D tensors (two spatial dimensions, one color dimension)
At the end of this lecture, you will learn:
- how humans recognize images (with the example of an image of a person)
- image recognition starts with high-level features and then moves to the details
Later, you will see the similarity between this human way of recognizing images and the way a deep neural network works.
At the end of this lecture, you will learn:
- CNNs - convolutional neural networks
- how a CNN works on an image, with a practical example
- the concept of feature finding (oblique shapes, horizontal shapes, and so on)
- that a CNN can also overfit; max pooling helps counter overfitting and also makes the computation more manageable (a convolution and max-pooling sketch follows)
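A minimal sketch of a single convolution followed by max pooling (TensorFlow 1.x ops; the filter sizes are illustrative):

```python
import tensorflow as tf

image = tf.placeholder(tf.float32, [None, 28, 28, 1])   # a batch of grayscale images

# 32 filters of size 5x5 scanning the image for local features
filters = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
conv = tf.nn.conv2d(image, filters, strides=[1, 1, 1, 1], padding="SAME")
conv = tf.nn.relu(conv)

# 2x2 max pooling: keep only the strongest response in each 2x2 block,
# which shrinks the feature map and makes the computation more manageable
pooled = tf.nn.max_pool(conv, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
print(pooled.shape)   # (?, 14, 14, 32)
```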
At the end of this lecture, you will understand the different layers of a deep neural network.
At the end of this lecture, you will learn about "overfitting". This is a central concept in machine learning. Regularization is done to mitigate overfitting. Next lecture covers regularization.
At the end of this lecture, you will understand:
- the concept of max pooling
- why the sigmoid activation function is not adequate and why ReLU is used instead (a short comparison is sketched below)
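A tiny illustration of the two activation functions (the values are illustrative): sigmoid squashes and saturates, while ReLU simply passes positive values through.

```python
import tensorflow as tf

z = tf.constant([-5.0, -1.0, 0.0, 1.0, 5.0])

with tf.Session() as sess:
    print(sess.run(tf.sigmoid(z)))   # squashed into (0, 1); nearly flat at the extremes
    print(sess.run(tf.nn.relu(z)))   # 0 for negatives, unchanged for positives
```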
At the end of this lecture, you will learn about the concept of "drop out". This is used to regularize the neural network against over fitting.
At the end of this lecture, you will learn some of the most important concepts of convolutional neural network (CNN). These are:
- Size of the convolution - the size and shape of the convolution kernel
- Stride of the convolution - by how much the convolution kernel should move
- Padding - how to handle the convolution at the boundaries of the image (the shape arithmetic is sketched below)
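A small sketch of how these three parameters determine the output size of a convolution (the standard arithmetic, shown with illustrative numbers):

```python
def conv_output_size(input_size, kernel_size, stride, padding):
    """Width/height of the output feature map along one spatial dimension."""
    return (input_size - kernel_size + 2 * padding) // stride + 1

# 28x28 image, 5x5 kernel, stride 1, no padding ("valid"): output shrinks to 24x24
print(conv_output_size(28, 5, 1, 0))   # 24

# Same kernel with padding 2 ("same"-style): output stays 28x28
print(conv_output_size(28, 5, 1, 2))   # 28

# Stride 2 roughly halves the output size
print(conv_output_size(28, 5, 2, 2))   # 14
```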
By following this lecture, you will be able to create a deep neural network all by yourself. In particular, you will learn:
- how to construct a neural network layer by layer
- how to set convolution, activation and max pooling in each layer
- how to set the loss function
- how to set the optimizer (a compact end-to-end sketch follows)
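A compact, hedged sketch of the overall recipe, using the high-level tf.layers API for brevity (the lecture's own code may build the layers with lower-level ops):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])
y_true = tf.placeholder(tf.float32, [None, 10])

# Layer by layer: convolution + ReLU, then max pooling, repeated, then dense layers
net = tf.layers.conv2d(x, filters=32, kernel_size=5, padding="same", activation=tf.nn.relu)
net = tf.layers.max_pooling2d(net, pool_size=2, strides=2)
net = tf.layers.conv2d(net, filters=64, kernel_size=5, padding="same", activation=tf.nn.relu)
net = tf.layers.max_pooling2d(net, pool_size=2, strides=2)
net = tf.layers.flatten(net)
net = tf.layers.dense(net, 1024, activation=tf.nn.relu)
logits = tf.layers.dense(net, 10)

# The loss function and the optimizer
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits))
train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)
```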
This lecture talks about the necessity of debugging a neural network and introduces a list of tools available for the purpose. In the next lecture, these tools are discussed in detail.
After following this lecture, you will be able to:
- create logs and summaries of a neural network training process
- assign names and name scopes to track progress
- set up TensorBoard to visualize the training progress in real time
- see accuracy and statistics related to all the weights and biases in real time (a logging sketch follows)
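A minimal sketch of the logging pattern (TensorFlow 1.x summary API; here `loss` and `accuracy` are dummy stand-ins for whatever tensors your model defines):

```python
import tensorflow as tf

# Stand-ins for your model's real loss and accuracy tensors
loss = tf.Variable(1.0, name="loss")
accuracy = tf.Variable(0.0, name="accuracy")

with tf.name_scope("metrics"):               # name scopes group nodes in the graph view
    tf.summary.scalar("loss", loss)          # log the loss curve
    tf.summary.scalar("accuracy", accuracy)  # log accuracy over time

merged = tf.summary.merge_all()
writer = tf.summary.FileWriter("./logs", tf.get_default_graph())

# Inside the training loop you would run `merged` alongside `train_op` and call
# writer.add_summary(summary, step); then launch:  tensorboard --logdir ./logs
```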
After following this lecture, you will be able to:
- go deeper into TensorBoard
- inspect every detail of the computational graph
- check the distribution of weights
- check the rate of decrease of the loss function and the increase in accuracy at every iteration
This lecture summarizes all the topics covered in this section.
Transfer Learning using Keras and TFLearn
At the end of this lecture, you will learn:
- the concept of transfer learning
- the advantages of transfer learning
At the end of this lecture, you will learn the exact steps needed to set up a transfer learning framework with Google Inception.
At the end of this lecture, you will be able to:
- start implementing your first transfer learning models using Google Inception
- make changes and update the key arguments of the transfer learning code provided by the Google TensorFlow team
- restructure and prepare data for transfer learning
- diagnose the learning and testing (a hedged sketch of the same idea follows)
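The lecture itself works with the retraining code published by the Google TensorFlow team. As a rough illustration of the same idea, here is a hedged sketch using the Keras InceptionV3 model (the layer sizes and class count are illustrative, not the lecture's code):

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# Load Inception pre-trained on ImageNet, without its final classification layer
base = InceptionV3(weights="imagenet", include_top=False)
for layer in base.layers:
    layer.trainable = False           # freeze the pre-trained feature extractor

# Add a small new classifier head for your own categories (e.g. 5 classes)
features = GlobalAveragePooling2D()(base.output)
outputs = Dense(5, activation="softmax")(features)

model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(...) then trains only the new head on your restructured image data
```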
At the end of this lecture, you will be able to:
- use transfer learning to create your own image classifier
- debug transfer learning models created using Google Inception
Summary of the transfer learning section. In a single sentence: "adopt pre-trained networks for your applications".
Introduction to how TensorFlow can be extended to make programming easier and less error-prone. Two important tools, Keras and TFLearn, are discussed in the subsequent lectures.
At the end of this lecture, you will be able to:
- install and learn the Keras library on top of TensorFlow
- develop ANN models by writing only a few lines in Keras
- use the Keras TensorBoard callback to track the progress of the learning phase (a short sketch follows)
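A minimal sketch of what "a few lines in Keras" looks like, including the TensorBoard callback (the layer sizes and the MNIST data loading are illustrative):

```python
from tensorflow import keras

# Load MNIST and flatten the 28x28 images into vectors of 784 pixels
(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784) / 255.0

model = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The TensorBoard callback writes logs that can be visualized during training
tensorboard = keras.callbacks.TensorBoard(log_dir="./keras_logs")
model.fit(x_train, y_train, epochs=5, callbacks=[tensorboard])
```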
At the end of this lecture, you will be able to:
- develop and train neural network models with only a few lines of code in TFLearn (sketched below)
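A minimal sketch in TFLearn (again with illustrative layer sizes; X and Y stand for your training arrays):

```python
import tflearn

# Define the network layer by layer
net = tflearn.input_data(shape=[None, 784])
net = tflearn.fully_connected(net, 128, activation="relu")
net = tflearn.fully_connected(net, 10, activation="softmax")
net = tflearn.regression(net, optimizer="adam", loss="categorical_crossentropy")

# Wrap it in a trainable model and fit it to data
model = tflearn.DNN(net)
# model.fit(X, Y, n_epoch=10, show_metric=True)
```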
At the end of this lecture, you will be able to:
- understand the differences and similarities between Keras and TFLearn
- decide whether to use Keras or TFLearn for your applications
Summary of the section on:
- transfer learning
- abstractions over TensorFlow: Keras and TFLearn
TensorFlow Extra Resources
These form a small part of a comprehensive list of frequently asked questions about TensorFlow.