Implement TensorFlow's offerings such as TensorBoard, TensorFlow.js, TensorFlow Probability, and TensorFlow Lite to build smart automation projects
Key Features
Book Description
TensorFlow has transformed the way machine learning is perceived. TensorFlow Machine Learning Projects teaches you how to exploit the benefits—simplicity, efficiency, and flexibility—of using TensorFlow in various real-world projects. With the help of this book, you'll not only learn how to build advanced projects using different datasets but also be able to tackle common challenges using a range of libraries from the TensorFlow ecosystem.
To start with, you'll get to grips with using TensorFlow for machine learning projects; you'll explore a wide range of projects using TensorForest and TensorBoard for detecting exoplanets, TensorFlow.js for sentiment analysis, and TensorFlow Lite for digit classification.
As you make your way through the book, you'll build projects in various real-world domains, incorporating natural language processing (NLP), the Gaussian process, autoencoders, recommender systems, and Bayesian neural networks, along with trending areas such as Generative Adversarial Networks (GANs), capsule networks, and reinforcement learning. You'll learn how to use the TensorFlow on Spark API and GPU-accelerated computing with TensorFlow to detect objects, followed by how to train and develop a recurrent neural network (RNN) model to generate book scripts.
By the end of this book, you'll have gained the required expertise to build full-fledged machine learning projects at work.
What you will learn
Who this book is for
TensorFlow Machine Learning Projects is for you if you are a data analyst, data scientist, machine learning professional, or deep learning enthusiast with basic knowledge of TensorFlow. This book is also for you if you want to build end-to-end projects in the machine learning domain using supervised, unsupervised, and reinforcement learning techniques.
You can read this e-book in Legimi apps or in any app that supports the following format:
Page count: 311
Year of publication: 2018
Copyright © 2018 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Commissioning Editor: Sunith Shetty
Acquisition Editor: Nelson Morris
Content Development Editor: Rhea Henriques
Technical Editor: Dinesh Chaudhary
Copy Editor: Safis Editing
Project Coordinator: Manthan Patel
Proofreader: Safis Editing
Indexer: Pratik Shirodkar
Graphics: Jisha Chirayil
Production Coordinator: Nilesh Mohite
First published: November 2018
Production reference: 1301118
Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK.
ISBN 978-1-78913-221-2
www.packtpub.com
– Ankit Jain
– Armando Fandango
– Amita Kapoor
Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.
Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals
Improve your learning with Skill Plans built especially for you
Get a free eBook or video every month
Mapt is fully searchable
Copy and paste, print, and bookmark content
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.
At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
Ankit Jain currently works as a senior research scientist at Uber AI Labs, the machine learning research arm of Uber. His work primarily involves the application of deep learning methods to a variety of Uber's problems, ranging from forecasting and food delivery to self-driving cars. Previously, he has worked in a variety of data science roles at the Bank of America, Facebook, and other start-ups. He has been a featured speaker at many of the top AI conferences and universities, including UC Berkeley, O'Reilly AI conference, and others. He has a keen interest in teaching and has mentored over 500 students in AI through various start-ups and bootcamps. He completed his MS at UC Berkeley and his BS at IIT Bombay (India).
Armando Fandango creates AI-empowered products by leveraging his expertise in deep learning, machine learning, distributed computing, and computational methods, and has served in thought leadership roles as Chief Data Scientist and Director at startups and large enterprises. He has been advising high-tech AI-based startups. Armando has authored books titled Python Data Analysis - Second Edition and Mastering TensorFlow. He has also published research in international journals and conferences.
Amita Kapoor is an Associate Professor at the Department of Electronics, SRCASW, University of Delhi. She has been teaching neural networks for twenty years. During her PhD, she was awarded the prestigious DAAD fellowship, which enabled her to pursue part of her research work at the Karlsruhe Institute of Technology, Germany. She was awarded the Best Presentation Award at the International Conference on Photonics 2008. Being a member of the ACM, IEEE, INNS, and ISBS, she has published more than 40 papers in international journals and conferences. Her research areas include machine learning, AI, neural networks, robotics, and Buddhism and ethics in AI. She has co-authored the book, TensorFlow 1.x Deep Learning Cookbook, by Packt Publishing.
Sujit Pal is a technology research director at Elsevier Labs, an advanced technology group within the Reed-Elsevier Group of companies. His areas of interests include semantic searching, natural language processing, machine learning, and deep learning. At Elsevier, he has worked on several initiatives involving search quality measurement and improvement, image classification and duplicate detection, and annotation and ontology development for medical and scientific corpora. He has co-authored a book on deep learning with Antonio Gulli and writes about technology on his blog, Salmon Run.
Meng-Chieh Ling has a PhD in theoretical physics from the Karlsruhe Institute of Technology. After his PhD, he joined The Data Incubator Reply in Munich, and later became an intern at AGT International in Darmstadt. Six months later, he was promoted to senior data scientist and is now working in the field of entertainment.
If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.
Title Page
Copyright and Credits
TensorFlow Machine Learning Projects
Dedication
About Packt
Why subscribe?
Packt.com
Contributors
About the authors
About the reviewers
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Download the color images
Conventions used
Get in touch
Reviews
Overview of TensorFlow and Machine Learning
What is TensorFlow?
The TensorFlow core
Tensors
Constants
Operations
Placeholders
Tensors from Python objects
Variables
Tensors generated from library functions
Obtaining variables with tf.get_variable()
Computation graph
The order of execution and lazy loading
Executing graphs across compute devices – CPU and GPGPU
Placing graph nodes on specific compute devices
Simple placement
Dynamic placement
Soft placement
GPU memory handling
Multiple graphs
Machine learning, classification, and logistic regression
Machine learning
Classification
Logistic regression for binary classification
Logistic regression for multiclass classification
Logistic regression with TensorFlow
Logistic regression with Keras
Summary
Questions
Further reading
Using Machine Learning to Detect Exoplanets in Outer Space
What is a decision tree?
Why do we need ensembles?
Decision tree-based ensemble methods
Random forests
Gradient boosting
Decision tree-based ensembles in TensorFlow
TensorForest Estimator
TensorFlow boosted trees estimator
Detecting exoplanets in outer space
Building a TFBT model for exoplanet detection
Summary
Questions
Further reading
Sentiment Analysis in Your Browser Using TensorFlow.js
Understanding TensorFlow.js
Understanding Adam Optimization
Understanding categorical cross entropy loss
Understanding word embeddings
Building the sentiment analysis model
Pre-processing data
Building the model
Running the model on a browser using TensorFlow.js
Summary
Questions
Digit Classification Using TensorFlow Lite
What is TensorFlow Lite?
Classification Model Evaluation Metrics
Classifying digits using TensorFlow Lite
Pre-processing data and defining the model
Converting TensorFlow model to TensorFlow Lite
Summary
Questions
Speech to Text and Topic Extraction Using NLP
Speech-to-text frameworks and toolkits
Google Speech Commands Dataset
Neural network architecture
Feature extraction module
Deep neural network module
Training the model
Summary
Questions
Further reading
Predicting Stock Prices using Gaussian Process Regression
Understanding Bayes' rule
Introducing Bayesian inference
Introducing Gaussian processes
Choosing kernels in GPs
Choosing the hyperparameters of a kernel
Applying GPs to stock market prediction
Creating a stock price prediction model
Understanding the results obtained
Summary
Questions
Credit Card Fraud Detection using Autoencoders
Understanding auto-encoders
Building a fraud detection model
Defining and training a fraud detection model 
Testing a fraud detection model
Summary
Questions
Generating Uncertainty in Traffic Signs Classifier Using Bayesian Neural Networks
Understanding Bayesian deep learning
Bayes' rule in neural networks
Understanding TensorFlow probability, variational inference, and Monte Carlo methods
Building a Bayesian neural network
Defining, training, and testing the model
Summary
Questions
Generating Matching Shoe Bags from Shoe Images Using DiscoGANs
Understanding generative models
Training GANs
Applications
Challenges
Understanding DiscoGANs
Fundamental units of a DiscoGAN
DiscoGAN modeling
Building a DiscoGAN model
Summary
Questions
Classifying Clothing Images using Capsule Networks
Understanding the importance of capsule networks
Understanding capsules
How do capsules work?
The dynamic routing algorithm
CapsNet for classifying Fashion MNIST images
CapsNet implementation
Understanding the encoder
Understanding the decoder
Defining the loss function
Training and testing the model
Reconstructing sample images
Limitations of capsule networks
Summary
Making Quality Product Recommendations Using TensorFlow
Recommendation systems
Content-based filtering
Advantages of content-based filtering algorithms
Disadvantages of content-based filtering algorithms
Collaborative filtering
Hybrid systems
Matrix factorization
Introducing the Retailrocket dataset
Exploring the Retailrocket dataset
Pre-processing the data
The matrix factorization model for Retailrocket recommendations
The neural network model for Retailrocket recommendations
Summary
Questions
Further reading
Object Detection at a Large Scale with TensorFlow
Introducing Apache Spark
Understanding distributed TensorFlow
Deep learning through distributed TensorFlow
Learning about TensorFlowOnSpark
Understanding the architecture of TensorFlowOnSpark 
Deep delving inside the TFoS API
Handwritten digits using TFoS
Object detection using TensorFlowOnSpark and Sparkdl
Transfer learning
Understanding the Sparkdl interface 
Building an object detection model
Summary
Generating Book Scripts Using LSTMs
Understanding recurrent neural networks
Pre-processing the data
Defining the model
Training the model
Defining and training a text-generating model
Generating book scripts
Summary
Questions
Playing Pacman Using Deep Reinforcement Learning
Reinforcement learning
Reinforcement learning versus supervised and unsupervised learning
Components of Reinforcement Learning
OpenAI Gym 
Creating a Pacman game in OpenAI Gym 
DQN for deep reinforcement learning
Applying DQN to a game
Summary
Further Reading
What is Next?
Implementing TensorFlow in production
Understanding TensorFlow Hub
TensorFlow Serving
TensorFlow Extended
Recommendations for building AI applications
Limitations of deep learning
AI applications in industries
Ethical considerations in AI
Summary
Other Books You May Enjoy
Leave a review - let other readers know what you think
TensorFlow has transformed the way machine learning is perceived. TensorFlow Machine Learning Projects teaches you how to exploit the benefits—simplicity, efficiency, and flexibility—of using TensorFlow in various real-world projects. With the help of this book, you'll not only learn how to build advanced projects using different datasets, but also be able to tackle common challenges using a range of libraries from the TensorFlow ecosystem.
To begin with, you'll get to grips with using TensorFlow for machine learning projects. You'll explore a wide range of projects using TensorForest and TensorBoard for detecting exoplanets, TensorFlow.js for sentiment analysis, and TensorFlow Lite for digit classification.
As you make your way through the book, you'll build projects in various real-world domains incorporating natural language processing (NLP), the Gaussian process, autoencoders, recommender systems, and Bayesian neural networks, along with trending areas such as generative adversarial networks (GANs), capsule networks, and reinforcement learning. You'll learn to use TensorFlow with the Spark API and explore GPU-accelerated computing with TensorFlow in order to detect objects, followed by understanding how to train and develop a recurrent neural network (RNN) model to generate book scripts.
By the end of this book, you'll have gained the required expertise to build full-fledged machine learning projects at work.
TensorFlow Machine Learning Projects is for you if you are a data analyst, data scientist, machine learning professional, or deep learning enthusiast with a basic knowledge of TensorFlow. This book is also for you if you want to build end-to-end projects in the machine learning domain using supervised, unsupervised, and reinforcement learning techniques.
Chapter 1, Overview of TensorFlow and Machine Learning, explains the basics of TensorFlow and has you build a machine learning model using logistic regression to classify hand-written digits.
Chapter 2, Using Machine Learning to Detect Exoplanets in Outer Space, covers how to detect exoplanets in outer space using ensemble methods that are based on decision trees.
Chapter 3, Sentiment Analysis in Your Browser Using TensorFlow.js, explains how to train and build a model on your web browser using TensorFlow.js. We will build a sentiment analysis model using a movie reviews dataset and deploy it to your web browser for making predictions.
Chapter 4, Digit Classification Using TensorFlow Lite, focuses on building a deep learning model for classifying hand-written digits and converting it into a mobile-friendly format using TensorFlow Lite. We will also learn about the architecture of TensorFlow Lite and how to use TensorBoard for visualizing neural networks.
Chapter 5, Speech to Text and Topic Extraction Using NLP, focuses on various speech-to-text options and the pre-built models provided by Google in TensorFlow, using the Google Speech Commands dataset.
Chapter 6, Predicting Stock Prices using Gaussian Process Regression, explains a popular forecasting model in Bayesian statistics called a Gaussian process. We use Gaussian processes from GPflow, a library built on top of TensorFlow, to develop a stock price prediction model.
Chapter 7, Credit Card Fraud Detection Using Autoencoders, introduces a dimensionality reduction technique called autoencoders. We identify fraudulent transactions in a credit card dataset by building autoencoders using TensorFlow and Keras.
Chapter 8, Generating Uncertainty in Traffic Signs Classifier using Bayesian Neural Networks, explains Bayesian neural networks, which help us to quantify the uncertainty in predictions. We will build a Bayesian neural network using TensorFlow to classify German traffic signs.
Chapter 9, Generating Matching Shoe Bags from Shoe Images Using DiscoGANs, introduces a new type of GAN known as Discovery GANs (DiscoGANs). We understand how its architecture differs from standard GANs and how it can be used in style transfer problems. Finally, we build a DiscoGAN model in TensorFlow to generate matching shoe bags from shoe images, and vice versa.
Chapter 10, Classifying Clothing Images Using Capsule Networks, implements a very recent image classification model—Capsule Networks. We get to understand its architecture and explain the nuances of its implementation in TensorFlow. We use the Fashion MNIST dataset to classify clothing images using this model.
Chapter 11, Making Quality Product Recommendations Using TensorFlow, covers techniques such as matrix factorization (SVD++), learning to rank, and convolutional neural network variations for recommendation tasks with TensorFlow.
Chapter 12, Object Detection at a Large Scale with TensorFlow, explores Yahoo's TensorFlowOnSpark framework for distributed deep learning on Spark clusters. Then, we will apply TensorFlowOnSpark to a large-scale dataset of images and train the network to detect objects.
Chapter 13, Generating Book Scripts Using LSTMs, explains how LSTMs are useful in generating new text. We use a book script from one of Packt's published books to build an LSTM-based deep learning model that can generate book scripts on its own.
Chapter 14, Playing Pacman Using Deep Reinforcement Learning, explains the utilization of reinforcement learning for training a model to play Pacman, teaching you about reinforcement learning in the process.
Chapter 15, What is Next?, introduces the other components of the TensorFlow ecosystem that are useful for deploying the models in production. We will also learn about various applications of AI across industries, the limitations of deep learning, and ethics in AI.
To get the most out of this book, download the book code from the GitHub repository and practice with the code in Jupyter Notebooks. Also, practice modifying the implementations already provided by the authors.
You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packt.com/support and register to have the files emailed directly to you.
You can download the code files by following these steps:
Log in or register at www.packt.com.
Select the SUPPORT tab.
Click on Code Downloads & Errata.
Enter the name of the book in the Search box and follow the onscreen instructions.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
WinRAR/7-Zip for Windows
Zipeg/iZip/UnRarX for Mac
7-Zip/PeaZip for Linux
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/TensorFlow-Machine-Learning-Projects. In case there's an update to the code, it will be updated on the existing GitHub repository.
We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://www.packtpub.com/sites/default/files/downloads/9781789132212_ColorImages.pdf.
There are a number of text conventions used throughout this book.
CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "By defining placeholders and passing the values to session.run()."
A block of code is set as follows:
tf.constant(
    value,
    dtype=None,
    shape=None,
    name='const_name',
    verify_shape=False
)
Any command-line input or output is written as follows:
const1 (x): Tensor("x:0", shape=(), dtype=int32)
const2 (y): Tensor("y:0", shape=(), dtype=float32)
const3 (z): Tensor("z:0", shape=(), dtype=float16)
Bold: Indicates a new term, an important word, or words that you see on screen. For example, words in menus or dialog boxes appear in the text like this. Here is an example: "Type a review into the box provided and click Submit to see the model's predicted score."
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packt.com/submit-errata, select your book, click on the Errata Submission Form link, and enter the details.
Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in, and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.
Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packt.com.
TensorFlow is a popular library for implementing machine learning-based solutions. It includes a low-level API known as TensorFlow core and many high-level APIs, including two of the most popular ones, known as TensorFlow Estimators and Keras. In this chapter, we will learn about the basics of TensorFlow and build a machine learning model using logistic regression to classify handwritten digits as an example.
We will cover the following topics in this chapter:
TensorFlow core:
Tensors in TensorFlow core
Constants
Placeholders
Operations
Tensors from Python objects
Variables
Tensors from library functions
Computation graphs:
Lazy loading and execution order
Graphs on multiple devices – CPU and GPGPU
Working with multiple graphs
Machine learning, classification, and logistic regression
Logistic regression examples in TensorFlow
Logistic regression examples in Keras
TensorFlow is a popular open source library that's used for implementing machine learning and deep learning. It was initially built at Google for internal consumption and was released publicly on November 9, 2015. Since then, TensorFlow has been extensively used to develop machine learning and deep learning models in several business domains.
To use TensorFlow in our projects, we need to learn how to program using the TensorFlow API. TensorFlow has multiple APIs that can be used to interact with the library. The TensorFlow APIs are divided into two levels:
Low-level API: This API, known as TensorFlow core, provides fine-grained, lower-level functionality and therefore offers complete control over how models are built and run. We will cover TensorFlow core in this chapter.
High-level API: These APIs provide high-level functionality that is built on top of TensorFlow core and are comparatively easier to learn and implement. Some high-level APIs include Estimators, Keras, TFLearn, TFSlim, and Sonnet. We will also cover Keras in this chapter.
TensorFlow core is the lower-level API on which the higher-level TensorFlow modules are built. In this section, we will give a quick overview of TensorFlow core and learn about its basic elements.
Tensors are the basic components in TensorFlow. A tensor is a multidimensional collection of data elements. It is generally identified by shape, type, and rank. Rank refers to the number of dimensions of a tensor, while shape refers to the size of each dimension. You may have seen several examples of tensors before, such as a zero-dimensional collection (also known as a scalar), a one-dimensional collection (also known as a vector), and a two-dimensional collection (also known as a matrix).
A scalar value is a tensor of rank 0 and shape []. A vector, or a one-dimensional array, is a tensor of rank 1 and shape [number_of_columns] or [number_of_rows]. A matrix, or a two-dimensional array, is a tensor of rank 2 and shape [number_of_rows, number_of_columns]. A three-dimensional array is a tensor of rank 3. In the same way, an n-dimensional array is a tensor of rank n.
A tensor can store data of one type in all of its dimensions, and the data type of a tensor is the same as the data type of its elements.
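For instance, the following short snippet (a minimal sketch of our own, assuming TensorFlow 1.x; the variable names are illustrative) shows the shape and data type of a scalar, a vector, and a matrix tensor:

import tensorflow as tf

scalar = tf.constant(3)                 # rank 0, shape ()
vector = tf.constant([1.0, 2.0, 3.0])   # rank 1, shape (3,)
matrix = tf.constant([[1, 2], [3, 4]])  # rank 2, shape (2, 2)

print(scalar.shape, scalar.dtype)       # () <dtype: 'int32'>
print(vector.shape, vector.dtype)       # (3,) <dtype: 'float32'>
print(matrix.shape, matrix.dtype)       # (2, 2) <dtype: 'int32'>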
The following are the most commonly used data types in the TensorFlow Python API, with a description of each:

tf.float16: 16-bit floating point (half-precision)
tf.float32: 32-bit floating point (single-precision)
tf.float64: 64-bit floating point (double-precision)
tf.int8: 8-bit signed integer
tf.int16: 16-bit signed integer
tf.int32: 32-bit signed integer
tf.int64: 64-bit signed integer
Tensors can be created in the following ways:
By defining constants, operations, and variables, and passing the values to their constructor
By defining placeholders and passing the values to session.run()
By converting Python objects, such as scalar values, lists, NumPy arrays, and pandas DataFrames, with the tf.convert_to_tensor() function (a quick sketch of this follows the list)
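As a quick illustration of the third option (a minimal sketch of our own, assuming TensorFlow 1.x with NumPy installed; the variable names are illustrative), tf.convert_to_tensor() turns a Python list or a NumPy array into a tensor:

import numpy as np
import tensorflow as tf

t1 = tf.convert_to_tensor([1.0, 2.0, 3.0])              # from a Python list
t2 = tf.convert_to_tensor(np.arange(6).reshape(2, 3))   # from a NumPy array

with tf.Session() as sess:
    print(sess.run(t1))  # [1. 2. 3.]
    print(sess.run(t2))  # [[0 1 2] [3 4 5]]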
Let's explore different ways of creating Tensors.
Constant-valued tensors are created using the tf.constant() function, which has the following definition:
tf.constant(
    value,
    dtype=None,
    shape=None,
    name='const_name',
    verify_shape=False
)
Let's create some constants with the following code:
const1=tf.constant(34,name='x1')
const2=tf.constant(59.0,name='y1')
const3=tf.constant(32.0,dtype=tf.float16,name='z1')
Let's take a look at the preceding code in detail:
The first line of code defines a constant tensor, const1, stores a value of 34, and names it x1.
The second line of code defines a constant tensor, const2, stores a value of 59.0, and names it y1.
The third line of code defines the data type as tf.float16 for const3. Use the dtype parameter or place the data type as the second argument to denote the data type.
Let's print the constants const1, const2, and const3:
print('const1 (x): ',const1)
print('const2 (y): ',const2)
print('const3 (z): ',const3)
When we print these constants, we get the following output:
const1 (x): Tensor("x:0", shape=(), dtype=int32)
const2 (y): Tensor("y:0", shape=(), dtype=float32)
const3 (z): Tensor("z:0", shape=(), dtype=float16)
To print the values of these constants, we can execute them in a TensorFlow session with the tfs.run() command:
print('run([const1,const2,const3]) : ',tfs.run([const1,const2,const3]))
We will see the following output:
run([const1,const2,const3]) : [34, 59.0, 32.0]
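The tfs object used in these snippets is a TensorFlow session that the book sets up before this point. A minimal sketch of the setup being assumed here (TensorFlow 1.x, with tfs as an interactive session) is as follows:

import tensorflow as tf

tfs = tf.InteractiveSession()   # the session object the snippets refer to as tfs

const1 = tf.constant(34, name='x1')
const2 = tf.constant(59.0, name='y1')
const3 = tf.constant(32.0, dtype=tf.float16, name='z1')

print('run([const1,const2,const3]) : ', tfs.run([const1, const2, const3]))

tfs.close()  # release the session's resources when done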
TensorFlow provides various functions to generate tensors with pre-populated values. The generated values from these functions can be stored in a constant or variable tensor. Such generated values can also be provided to the tensor constructor at the time of initialization.
As an example, let's generate a 1-D tensor that's been pre-populated with 100 zeros:
a=tf.zeros((100,))
print(tfs.run(a))
Some of the TensorFlow library functions that populate tensors with different values at the time of their definition are listed as follows (a short sketch using a few of them appears after the list):

Populating all of the elements of a tensor with the same value: tf.ones_like(), tf.ones(), tf.fill(), tf.zeros(), and tf.zeros_like()
Populating tensors with sequences: tf.range() and tf.lin_space()
Populating tensors with a probability distribution: tf.random_uniform(), tf.random_normal(), tf.random_gamma(), and tf.truncated_normal()
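As a brief sketch of a few of these functions in action (assuming TensorFlow 1.x and an interactive session like the tfs object used earlier; the variable names are our own):

import tensorflow as tf

tfs = tf.InteractiveSession()        # same kind of session used throughout the chapter

ones = tf.ones((2, 3))               # a 2x3 tensor of ones
filled = tf.fill((2, 3), 7)          # a 2x3 tensor filled with the value 7
seq = tf.lin_space(0.0, 1.0, 5)      # 5 evenly spaced values from 0.0 to 1.0
rnd = tf.random_normal((2, 2))       # samples drawn from a standard normal distribution

print(tfs.run([ones, filled, seq]))
print(tfs.run(rnd))                  # values differ on every run

tfs.close()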
If a variable is defined with a name that has already been used for another variable, then an exception is thrown by TensorFlow. The tf.get_variable() function makes it convenient and safe to create a variable in place of using the tf.Variable() function. The tf.get_variable() function