Introduction and installation

TensorFlow is an open-source machine learning library built by Google. In these few posts I'll try to give a short introduction to TensorFlow and show how we can use it to solve many machine learning problems.

How to install TensorFlow? There's a nice tutorial on the Google blog: install tensorflow. Once we have TensorFlow installed we can start building our first program.
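On most setups the installation comes down to a single command, pip install tensorflow, run inside your Python environment; the tutorial linked above covers the details.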

TensorFlow basic commands

Before we start working with TensorFlow we need to import the library:

import tensorflow as tf

TensorFlow is a library that works on a computation graph, so every operation we want to use has to be added to the graph. Thanks to the graph we can easily list which operations have been defined:

graph = tf.get_default_graph() 
for op in graph.get_operations(): 
    print op.name 

Wondering why we didn't see anything after running this loop? That's because we haven't added anything to the graph yet. So let's try to add a constant.

const = tf.constant(0.0, dtype=tf.float64, name='constant')

After adding it, if we run the loop again and print the whole operation (print op instead of print op.name), we see the new node:

name: "constant"
op: "Const"
attr {
  key: "dtype"
  value {
    type: DT_DOUBLE
  }
}
attr {
  key: "value"
  value {
    tensor {
      dtype: DT_DOUBLE
      tensor_shape {
      }
      double_val: 0.0
    }
  }
}

This shows that the graph now contains one operation, named 'constant'.

Note that so far we have only defined the graph; nothing has been computed yet. To evaluate the constant we need a session:

sess = tf.Session()
print sess.run(const)
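This should print 0.0.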

Let's try to add a variable. Unlike constants, variables have to be initialized before they are used:

var = tf.Variable(5.0, dtype=tf.float64, name='var')
sess.run(tf.global_variables_initializer())
print sess.run(var)
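This time we should see 5.0.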

Now we can combine the two tensors, for example by adding them:

y = tf.add(const, var)
print sess.run(y)
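The result is 5.0 (0.0 + 5.0).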

Another way is to use Python operators directly; note that the newly created variable has to be initialized as well:

z = tf.Variable(1.3, dtype=tf.float64)
sess.run(tf.global_variables_initializer())
t = z + var
print sess.run(z)
print sess.run(t)
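This prints 1.3 and then 6.3.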

Now you can play with basic math in TensorFlow.
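If you want to experiment, here is a minimal sketch combining a few more of the arithmetic ops (tf.subtract, tf.multiply and tf.pow); the constant values are arbitrary, chosen only for illustration:

import tensorflow as tf

# two example constants
u = tf.constant(2.0, dtype=tf.float64)
v = tf.constant(3.0, dtype=tf.float64)

s = tf.add(u, v)        # 2.0 + 3.0 = 5.0
d = tf.subtract(u, v)   # 2.0 - 3.0 = -1.0
p = tf.multiply(u, v)   # 2.0 * 3.0 = 6.0
q = tf.pow(u, v)        # 2.0 ** 3.0 = 8.0

sess = tf.Session()
print sess.run([s, d, p, q])

All four results can be fetched in a single sess.run call by passing a list of tensors.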

We know how to define constants and variables in TensorFlow, so let's try to write a linear regression.

Linear regression

We will try to fit a line to a single data point (x = 1, y = 5):

x = tf.constant(1.0, dtype=tf.float64, name='x')
y = tf.constant(5.0, dtype=tf.float64, name='y')

Because the model is linear, we define a and b as variables; these are the values the learning process will adjust.

a = tf.Variable(0.0, dtype=tf.float64, name='a')
b = tf.Variable(0.0, dtype=tf.float64, name='b')

and now we can define the model's prediction:

y_ = a*x + b

Next we define the loss function, the squared error between the target and the prediction:

loss = (y-y_)**2

and choose an optimization method, here plain gradient descent:

optim = tf.train.GradientDescentOptimizer(learning_rate=0.025)

Then we compute the gradients of the loss with respect to the variables:

grads_and_vars = optim.compute_gradients(loss)

and to see the result:

sess = tf.Session()
sess.run(tf.global_variables_initializer())
print sess.run(grads_and_vars[0][0])

As a result we see the computed gradient, which should equal -10: the derivative of (y - (a·x + b))² with respect to a is -2·x·(y - a·x - b), and at a = b = 0 with x = 1, y = 5 this gives -10.
You should remember that compute_gradients returns a list of (gradient, variable) pairs. Full documentation of this can be found here.

Now, to train our model, we apply the computed gradients to the variables:

sess.run(optim.apply_gradients(grads_and_vars))

and we can check how the variables changed:

print sess.run(a)
print sess.run(b)
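After a single step both a and b should equal 0.25: each started at 0, the gradient for each is -10, and the update is 0 - 0.025 · (-10) = 0.25.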

Of course, to train the model we need to repeat this step a number of times. Here is the full code:

import tensorflow as tf
x = tf.constant(1.0,  name='input')
a = tf.Variable(0.8,  name='weight')
b = tf.constant(1.0,  name='bias')
y = tf.add(tf.multiply(a, x), b, name='output')
y_ = tf.constant(5.0)
loss = (y - y_)**2
optim = tf.train.GradientDescentOptimizer(learning_rate=0.025)
grads_and_vars = optim.compute_gradients(loss)
model = tf.global_variables_initializer()
sess = tf.Session()
sess.run(model)
for i in range(50):
    print sess.run(grads_and_vars[0][0])
    sess.run(optim.apply_gradients(grads_and_vars))
    print sess.run(a)
    print sess.run(loss)
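After 50 iterations a should be approaching 4.0 (the value for which a·1.0 + 1.0 equals the target 5.0) and the loss should be steadily decreasing towards 0.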

We can also build the model without computing the gradients manually, by using the optimizer's minimize method:

import tensorflow as tf
x = tf.constant(1.0,  name='input')
a = tf.Variable(0.8,  name='weight')
b = tf.constant(1.0,  name='bias')
y = tf.add(tf.multiply(a, x), b, name='output')
y_ = tf.constant(5.0)
loss = (y - y_)**2
train_step = tf.train.GradientDescentOptimizer(0.025).minimize(loss)
model = tf.global_variables_initializer()
sess = tf.Session()
sess.run(model)
for i in range(50):
    sess.run(train_step)
    print sess.run(a)
    print sess.run(loss)
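The results should match the previous version, since minimize simply combines compute_gradients and apply_gradients in a single call.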

This code handles only one data point. How do we build the model for more than one data point? That will be explained in the next post.