Deep Learning with Keras Tutorial – Part 1

About this series

This post is the first part of the Deep Learning with Keras series. The series aims to introduce the Keras deep learning library and show how to use it to train various deep learning models. We will cover topics such as regression, classification, convolutional networks, recurrent networks, transfer learning and many others. The tutorials are completely example-driven to make sure readers learn the concepts and how to apply them to real datasets.

In this first post, we will introduce Keras and its different components. We will go over its most important features and the steps needed to define deep learning models.

What is Keras?

Keras is a deep-learning framework that provides a convenient way to define and train almost any kind of deep-learning model. It is written in Python and can be run on top of TensorFlow, CNTK, or Theano. You are free to use it in commercial projects since it is distributed under the MIT license.

What makes Keras so popular?

One of the most important characteristics of Keras is its user-friendly API. You can develop a state-of-the-art deep learning model in no time, which makes it ideal for easy and fast prototyping. In addition, it supports many modern deep learning layers, such as convolutional and recurrent layers, and these layers can be stacked sequentially or combined in many different ways with very little code. Regarding hardware, you can run Keras on CPUs and GPUs and switch between them easily.

Installing Keras

The installation process is very easy. First, we need to install the backend where all the calculations take place (we will use TensorFlow). Then we install Keras.

In your command line type:

$ pip install tensorflow
$ pip install keras

It is as simple as that. Let us test the installation:

$ python -c 'import keras; print(keras.__version__)'

You should now see the installed version of Keras.

Keras Workflow


In order to build a deep learning project in Keras, you would normally follow this workflow (a minimal code sketch follows the list):

  1. Define your training data
  2. Define your network
  3. Configure the learning process by choosing:
    1. Loss function
    2. Optimizer
    3. Metrics
  4. Iterate over the training data and start fitting your model
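Below is a minimal sketch of these four steps, assuming Keras is installed with the TensorFlow backend. The data is randomly generated, and the layer sizes, loss, optimizer and number of epochs are placeholder choices made only to illustrate the flow:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# 1. Define your training data (here: 100 random samples with 20 features)
x_train = np.random.random((100, 20))
y_train = np.random.randint(2, size=(100, 1))

# 2. Define your network
model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(20,)))
model.add(Dense(1, activation='sigmoid'))

# 3. Configure the learning process: loss, optimizer and metrics
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

# 4. Iterate over the training data and fit the model
model.fit(x_train, y_train, epochs=10, batch_size=32)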

Keras Models

The core data structure of Keras is the model. The keras.models module gives you two ways to define one: the Sequential class and the Model class. The Sequential class builds the network layer by layer in sequential order, as shown in the sketch below, while the Model class allows for more complex network structures, which we will see in future posts.
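As a quick preview, here is a sketch of the two equivalent ways to build the same small Sequential model; the layer sizes and input shape are arbitrary placeholders:

from keras.models import Sequential
from keras.layers import Dense

# Option 1: pass a list of layers to the constructor
model = Sequential([
    Dense(32, activation='relu', input_shape=(10,)),
    Dense(1, activation='sigmoid'),
])

# Option 2: start from an empty model and add the layers one by one
model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(10,)))
model.add(Dense(1, activation='sigmoid'))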

Model Lifecycle

A Keras model follows this lifecycle:

  1. Model creation
    1. Define a model using the Sequential or Model class
    2. Add the layers
  2. Configure the model by specifying the loss, optimizer and metrics. This is done by calling the compile method.
  3. Train the model by calling the fit method.
  4. By then you will have a trained model that you can use for evaluation on test data or prediction on new data (see the sketch after this list).
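As a sketch of step 4, and continuing from the workflow example above (so model is the already-trained model from that sketch), evaluation and prediction look like this; the test and new data arrays are placeholders you would normally load from a real dataset:

import numpy as np

# Placeholder held-out data with the same 20 features used in the earlier sketch
x_test = np.random.random((20, 20))
y_test = np.random.randint(2, size=(20, 1))
x_new = np.random.random((5, 20))

# evaluate returns the loss and any metrics configured in compile()
loss, accuracy = model.evaluate(x_test, y_test)

# predict returns the raw model outputs for new, unlabeled data
predictions = model.predict(x_new)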

Core Layers

Keras supports many layers for building neural networks. They are accessible from keras.layers, and the following are the most basic classes we are going to use:

  • Dense: the standard layer of neurons fully connected to the previous layer. It implements the operation output = activation(X * W + bias)
  • Activation: applies an activation function to an output
  • Dropout: applies dropout to the input. Basically, it works by randomly deactivating a set of neurons in a given layer according to a predefined probability rate. Dropout is used to prevent overfitting
  • Conv2D: applies a 2D convolution to train a set of kernels, mainly on image datasets
  • Flatten: flattens the input into a 1D vector. Mainly used after feature extraction in convolutional neural networks.

Don’t be intimidated by some of these layers; we will learn them one by one in future posts. The short sketch below shows how they fit together.
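For reference, here is a minimal sketch that stacks all five layers into a tiny image classifier. The input shape, kernel count and layer sizes are arbitrary placeholders rather than a recommended architecture:

from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Conv2D, Flatten

model = Sequential()
# Conv2D: learn 16 kernels of size 3x3 on 28x28 grayscale images
model.add(Conv2D(16, (3, 3), input_shape=(28, 28, 1)))
# Activation: apply a non-linearity to the convolution output
model.add(Activation('relu'))
# Flatten: turn the 2D feature maps into a single 1D vector
model.add(Flatten())
# Dense: a fully connected layer on top of the extracted features
model.add(Dense(64, activation='relu'))
# Dropout: randomly deactivate 50% of the neurons during training
model.add(Dropout(0.5))
# Dense output layer for, say, 10 classes
model.add(Dense(10, activation='softmax'))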

Losses and Optimizers

After defining a model, we need to select a loss function and an optimizer. The optimizer’s job is to find the model parameters that minimize the loss function.

Available optimizers: SGD, RMSprop, Adagrad, Adadelta, Adam, Adamax, Nadam

Available loss functions: mean squared error, mean absolute error, mean absolute percentage error, mean squared logarithmic error, squared hinge, hinge, categorical hinge, logcosh, categorical crossentropy, sparse categorical crossentropy, binary crossentropy, Kullback-Leibler divergence, poisson, cosine proximity.

Explaining how each optimizer and loss function works is outside the scope of this series; if you want to know more about them, please visit the official Keras documentation for losses and optimizers. The short sketch below shows how they are passed to a model.
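As a sketch, a loss and an optimizer can be passed to compile either by name or as an instance; an instance lets you tune parameters such as the learning rate. The tiny model below is only a placeholder, and depending on your Keras version the learning-rate argument may be named lr or learning_rate:

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

# A placeholder model, just so compile() has something to configure
model = Sequential([Dense(10, activation='softmax', input_shape=(20,))])

# Option 1: pass the loss and optimizer by name, using default settings
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Option 2: pass an optimizer instance to customize it (here, the learning rate)
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.001), metrics=['accuracy'])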

Keras Utils

Keras provides additional utility functions that facilitate building and inspecting models. We will mainly use them to preprocess data and to visualize models; a small example follows. For more information about the available functions, please visit the official documentation.
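One utility we will rely on often is to_categorical, which one-hot encodes integer class labels; here is a minimal sketch with made-up labels (another handy utility, plot_model, saves a diagram of a model to an image file, provided the pydot and graphviz packages are installed):

import numpy as np
from keras.utils import to_categorical

labels = np.array([0, 2, 1, 2])                  # integer class labels
one_hot = to_categorical(labels, num_classes=3)  # one row per label, one column per class
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]
#  [0. 0. 1.]]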

Conclusion

This post was a simple introduction to Keras. We introduced the framework, learned about the important classes, the standard workflow and the model lifecycle. In the next post, we will learn how to use Keras to train a linear regression model.


Note: This is a guest post, and opinion in this article is of the guest writer. If you have any issues with any of the articles posted at www.marktechpost.com please contact at asif@marktechpost.co

I am a Data Scientist specialized in Deep Learning, Machine Learning and Big Data (Storage, Processing and Analysis). I have a strong research and professional background with a Ph.D. degree in Computer Science from Université Paris Saclay and VEDECOM institute. I practice my skills through R&D, consultancy and by giving data science training.
