Basic TensorFlow Tutorial
Overview of Basic Operations in the Open-Source Software Library TensorFlow:
Introduction:
TensorFlow is an open-source software library for dataflow programming across a range of tasks. It is a math library, mostly used in machine learning applications such as neural networks. It was created by Google and released as open source in 2015. Nowadays it is used in many machine learning applications, for example:
- RankBrain
- SmartReply and Google Neural Machine Translation (used in Google Translate)
- Google Brain
- Self-driving cars, and so on.
Today, due to rising user demands and the rapid development of technology, the traditional programming approach is not sufficient to build systems that are robust, automatic, and user-friendly. To accomplish such tasks, we need a modern machine learning approach, which TensorFlow provides.
If you are looking for a free online course on TensorFlow, a good one is:
Cognitive Class: https://cognitiveclass.ai/courses/deep-learning-tensorflow/
For more tutorials, please visit my GitHub page: https://github.com/sushant097/Tensorflow-Tutorial
For detailed information, please visit: https://www.tensorflow.org/
BASIC TENSORFLOW
Variables:
TensorFlow is a way of representing computation without actually performing it until asked. In this sense, it is a form of lazy computing, and it allows for some great improvements to the running of code:
- Faster computation of complex variables
- Distributed computation across multiple systems, including GPUs.
- Reduced redundancy in some computations
Let’s have a look at this in action. First, a very basic python script:
x = 35
y = x + 5
print(y)
This script basically just says “create a variable x with value 35, set the value of a new variable y to x plus 5, which is currently 40, and print it out”. The value 40 will print out when you run this program. If you aren’t familiar with Python, create a new text file called basic_script.py, copy that code in, save it on your computer, and run it with: python basic_script.py
Note that the path (i.e. basic_script.py) must reference the file, so if it is in the Code folder, you use: python Code/basic_script.py
If that is working, let’s convert it to a TensorFlow equivalent.
import tensorflow as tf

x = tf.constant(35, name='x')
y = tf.Variable(x + 5, name='y')
print(y)
After running this, you’ll get quite a funny output, something like
<tensorflow.python.ops.variables.Variable object at 0x7f074bfd9ef0>
. This is clearly not the value 40.
The reason is that our program actually does something quite different from the previous one. The code here does the following:
- Import the tensorflow module and call it tf
- Create a constant value called x, and give it the numerical value 35
- Create a Variable called y, and define it as being the equation x + 5
- Print out the equation object for y
The subtle difference is that y isn’t given “the current value of x + 5” as in our previous program. Instead, it is effectively an equation that means “when this variable is computed, take the value of x (as it is then) and add 5 to it”. The computation of the value of y is never actually performed in the above program.
Let’s fix that:
import tensorflow as tf

x = tf.constant(35, name='x')
y = tf.Variable(x + 5, name='y')
model = tf.global_variables_initializer()

with tf.Session() as session:
    session.run(model)
    print(session.run(y))
We have removed the print(y) statement, and instead we have code that creates a session, and actually computes the value of y. This is quite a bit of boilerplate, but it works like this:
- Import the tensorflow module and call it tf
- Create a constant value called x, and give it the numerical value 35
- Create a Variable called y, and define it as being the equation x + 5
- Initialize the variables with tf.global_variables_initializer() (we will go into more detail on this)
- Create a session for computing the values
- Run the initialization model created in step 4
- Run just the variable y and print out its current value
Step 4 above is where some magic happens. In this step, a graph is created of the dependencies between the variables. In this case, the variable y depends on the variable x, and that value is transformed by adding 5 to it. Keep in mind that this value isn’t computed until step 7, as up until then, only equations and relations are defined.
Exercise:
1) Generate a NumPy array of 10,000 random numbers (called x) and create a Variable storing the equation: y = 5x² − 3x + 15
You can generate the NumPy array using the following code:
import numpy as np
data = np.random.randint(1000, size=10000)
This data variable can then be used in place of the list from question 1 above. As a general rule, NumPy should be used for larger lists/arrays of numbers, as it is significantly more memory efficient and faster to compute on than lists. It also provides a significant number of functions (such as computing the mean) that aren’t normally available to lists.
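One possible sketch of the NumPy half of this exercise is shown below; wrapping the result in a TensorFlow Variable follows the same pattern as the earlier examples:

```python
import numpy as np

# Generate 10,000 random integers in the range [0, 1000)
data = np.random.randint(1000, size=10000)

# Evaluate y = 5x^2 - 3x + 15 elementwise over the whole array
y = 5 * data**2 - 3 * data + 15

print(y.shape)  # (10000,)
```

Because the arithmetic operators apply elementwise to the whole array, no explicit loop over the 10,000 values is needed.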
2) Use TensorBoard to visualise the graph for some of these examples. To run TensorBoard, use the command: tensorboard --logdir=path/to/log-directory
import tensorflow as tf

x = tf.constant(35, name='x')
print(x)
y = tf.Variable(x + 5, name='y')

with tf.Session() as session:
    merged = tf.summary.merge_all()
    writer = tf.summary.FileWriter("/tmp/basic", session.graph)
    model = tf.global_variables_initializer()
    session.run(model)
    print(session.run(y))
Please check this video for more details: Click Here
Dimensionality and Broadcasting:
When we operate on arrays of different dimensionality, they can combine in different ways, either elementwise or through broadcasting.
Let’s start from scratch and build up to more complex examples. In the below example, we have a TensorFlow constant representing a single number.
import tensorflow as tf

a = tf.constant(3, name='a')

with tf.Session() as session:
    print(session.run(a))
Not much of a surprise there! We can also do computations, such as adding another number to it:
a = tf.constant(3, name='a')
b = tf.constant(4, name='b')
add_op = a + b

with tf.Session() as session:
    print(session.run(add_op))
Let’s extend this concept to a list of numbers. To start, let’s create a list of three numbers, and then add another list of numbers to it:
a = tf.constant([1, 2, 3], name='a')
b = tf.constant([4, 5, 6], name='b')
add_op = a + b

with tf.Session() as session:
    print(session.run(add_op))
This is known as an elementwise operation, where the elements from each list are considered in turn, added together and then the results combined.
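Conceptually, the elementwise addition above behaves like this plain-Python sketch, which pairs up corresponding elements and adds each pair in turn:

```python
a = [1, 2, 3]
b = [4, 5, 6]

# Walk both lists in step, adding corresponding elements
result = [x + y for x, y in zip(a, b)]
print(result)  # [5, 7, 9]
```

TensorFlow performs the same pairing internally, but as a single vectorized operation rather than a Python loop.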
What happens if we just add a single number to this list?
a = tf.constant([1, 2, 3], name='a')
b = tf.constant(4, name='b')
add_op = a + b

with tf.Session() as session:
    print(session.run(add_op))
Is this what you expected? This is known as a broadcast operation. Our primary object of reference was a, which is a list of numbers, also called an array or a one-dimensional vector. Adding a single number (called a scalar) results in a broadcast operation, where the scalar is added to each element of the list.
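TensorFlow’s broadcasting follows the same rules as NumPy’s, so the behavior can be checked with a pure-NumPy sketch:

```python
import numpy as np

a = np.array([1, 2, 3])
b = 4  # a scalar

# The scalar is broadcast to every element of the array
print(a + b)  # [5 6 7]
```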
Now let’s look at an extension, which is a two-dimensional array, also known as a matrix. This extra dimension can be thought of as a “list of lists”. In other words, a list is a combination of scalars, and a matrix is a list of lists.
That said, how do operations on matrices work?
a = tf.constant([[1, 2, 3], [4, 5, 6]], name='a')
b = tf.constant([[1, 2, 3], [4, 5, 6]], name='b')
add_op = a + b

with tf.Session() as session:
    print(session.run(add_op))
That’s elementwise. If we add a scalar, the results are fairly predictable:
a = tf.constant([[1, 2, 3], [4, 5, 6]], name='a')
b = tf.constant(100, name='b')
add_op = a + b

with tf.Session() as session:
    print(session.run(add_op))
In this case, the scalar was broadcast to the shape of the matrix, so 100 was added to every element. What if we instead wanted to add a different value to each row of the matrix, i.e. broadcast down the columns?
a = tf.constant([[1, 2, 3], [4, 5, 6]], name='a')
b = tf.constant([100, 101], name='b')
add_op = a + b

with tf.Session() as session:
    print(session.run(add_op))
This didn’t work, as TensorFlow attempted to broadcast across the rows. It couldn’t do this, because the number of values in b (2) was not the same as the number of scalars in each row (3).
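Since TensorFlow follows NumPy’s broadcasting rules, the same shape mismatch can be reproduced in NumPy:

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)
b = np.array([100, 101])              # shape (2,)

try:
    a + b
except ValueError as e:
    # The trailing dimensions (3 vs 2) are incompatible, so broadcasting fails
    print("broadcast failed:", e)
```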
We can do this operation by creating a new matrix from our list instead.
a = tf.constant([[1, 2, 3], [4, 5, 6]], name='a')
b = tf.constant([[100], [101]], name='b')
add_op = a + b

with tf.Session() as session:
    print(session.run(add_op))
What happened here? To understand this, let’s look at matrix shapes.
a.shape
TensorShape([Dimension(2), Dimension(3)])
b.shape
TensorShape([Dimension(2), Dimension(1)])
You can see from these two examples that a has two dimensions, the first of size 2 and the second of size 3. In other words, it has two rows, each with three scalars in it.
Our b constant also has two dimensions, two rows with one scalar in each. This is not the same as a list, nor is it the same as a matrix of one row with two scalars.
Because the shapes match on the first dimension but not the second, the broadcasting happened across columns instead of rows.
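A flat list can be turned into such a column by reshaping it, sketched here in NumPy (TensorFlow provides tf.reshape for the same purpose):

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)
b = np.array([100, 101])              # shape (2,)

# Reshape the flat list into a (2, 1) column so it broadcasts down the columns
b_col = b.reshape(2, 1)
print(a + b_col)
# [[101 102 103]
#  [105 106 107]]
```

This gives the per-row addition the earlier [100, 101] attempt was aiming for.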
Exercise:
1) Create a 3-dimensional matrix. What happens if you add a scalar, array or matrix to it?
2) Use tf.shape (it’s an operation) to get a constant’s shape during operation of the graph.
3) Think about use cases for higher-dimensional matrices. In other words, where might you need a 4D matrix, or even a 5D matrix? Hint: think about collections rather than single objects.
This tutorial was created by Sushant Gautam. If you have any queries, please ask in the comments.
For suggestions and feedback, please email me at: Sushant1234gautam@gmail.com
Any ideas and feedback are welcome! Thanks a lot for reading.
Please see my GitHub page https://github.com/sushant097/Tensorflow-Tutorial for more details.