Thursday, September 6, 2018

Solving the C++ library incompatibility problem when using Matlab

I tend to mix Python code and Matlab code together. The most convenient way is to expose Matlab as a computation engine for Python. However, Matlab ships with its own specific version of the standard C++ library, which is probably incompatible with the system's one. The following workaround resolves the conflict:

export LD_PRELOAD=/usr/local/matlab2016b/sys/os/glnxa64/libstdc++.so.6.0.20
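Once the library issue is resolved, Matlab can be driven from Python through the official MATLAB Engine API. Here is a minimal sketch (it assumes the engine has been installed from Matlab's extern/engines/python directory; the function call is only an illustration):

import matlab.engine

eng = matlab.engine.start_matlab()   # starts a Matlab session in the background
print(eng.sqrt(4.0))                 # call a Matlab builtin from Python, prints 2.0
eng.quit()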

Wednesday, July 18, 2018

A wrapper around batch_normalization

Usually I use Sonnet, but recently, while looking over the documentation in-lined with the source code, I started to suspect a potential bug in its implementation. Turning to the implementation provided by TensorFlow was no better: there are lots of pitfalls here and there.

The following is a wrapper I wrote to demonstrate a use case of the routine; I hope it is useful. And I believe you know how to save and restore the variables, yes? (A sketch is included after the code, just in case.)

Enjoy coding no matter how frustrating.

import numpy as np
import tensorflow as tf
import sonnet as snt

from tensorflow.python.layers import normalization


class MyBatchNorm(object):
    def __init__(self):
        self._bn = normalization.BatchNormalization(axis = 1,
            epsilon = np.finfo(np.float32).eps, momentum = 0.9)

    def __call__(self, inputs, is_training = True, test_local_stats = False):
        # training=True normalizes with the statistics of the current batch and
        # queues the moving-average updates into tf.GraphKeys.UPDATE_OPS (see the
        # usage below); training=False normalizes with the accumulated moving statistics
        outputs = self._bn(inputs, training = is_training)

        # expose the moving statistics through the standard collection so that
        # tf.moving_average_variables() (and e.g. a Saver) can find them
        self._add_variable(self._bn.moving_mean)
        self._add_variable(self._bn.moving_variance)

        return outputs

    def _add_variable(self, var):
        if var not in tf.get_collection(tf.GraphKeys.MOVING_AVERAGE_VARIABLES):
            tf.add_to_collection(tf.GraphKeys.MOVING_AVERAGE_VARIABLES, var)

t = tf.truncated_normal([2, 4, 4, 2])


bn = MyBatchNorm()
bn2 = MyBatchNorm()

n = bn(t)
n2 = bn2(t)

# the moving-average update ops are only collected in UPDATE_OPS; tie them to
# the output so they actually run whenever n is evaluated
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    n = tf.identity(n)


with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    n_v, n2_v = sess.run([n, n2])

    print(tf.trainable_variables())
    print(tf.moving_average_variables())
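
And, as promised, a minimal sketch of saving and restoring the variables, including the moving statistics collected above (the checkpoint path and the extra sess.run(n) are only illustrative):

saver = tf.train.Saver(var_list = tf.trainable_variables() + tf.moving_average_variables())

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(n)  # run once so the moving statistics get updated
    saver.save(sess, "./bn_demo.ckpt")

with tf.Session() as sess:
    saver.restore(sess, "./bn_demo.ckpt")
    print(sess.run(tf.moving_average_variables()))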

Friday, July 6, 2018

A tricky error regarding multiple-GPU training

A necessary step when using multiple GPUs to train a model is averaging the gradients computed by the different GPUs. A typical error occurs when the gradients are only partially available, for example when stop_gradient is used somewhere in the graph, so that some (gradient, variable) pairs come back with a gradient of None. The error message looks like this:

ValueError: Tried to convert 'input' to a tensor and failed. Error: None values not supported.

If it happens, try to explicitly set the trainable property of the affected variables to False.
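
For reference, here is a minimal sketch of the gradient-averaging step with a guard against such None gradients (the helper name and the tower layout are only illustrative, not a fixed API):

import tensorflow as tf

def average_gradients(tower_grads):
    # tower_grads: one list of (gradient, variable) pairs per GPU, as returned
    # by optimizer.compute_gradients() on each tower
    averaged = []
    for grads_and_vars in zip(*tower_grads):
        var = grads_and_vars[0][1]
        grads = [g for g, _ in grads_and_vars if g is not None]
        if not grads:
            # gradient is None on every tower (e.g. behind tf.stop_gradient);
            # skip it instead of feeding None into tf.stack, which is what
            # triggers the ValueError above
            continue
        averaged.append((tf.reduce_mean(tf.stack(grads), axis = 0), var))
    return averaged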

Monday, July 2, 2018

Dynamic Programming code in TensorFlow

Following is the dynamic programming code, implemented in TensorFlow, for Example 4.1 from the great book Reinforcement Learning: An Introduction. As promised, eventually all of the pseudo code from the book will be implemented in TensorFlow.

Enjoy it, and further discussion is welcome.

import tensorflow as tf

num_iters = 1000
num_states = 16

V = [tf.get_variable("V%d" % i, [], tf.float64, initializer = tf.zeros_initializer()) for i in range(num_states)]

# Bellman expectation backup under the equiprobable random policy: every
# transition gives reward -1, moves off the grid leave the state unchanged,
# and the terminal states 0 and 15 keep a value of 0
V0 = V[0]
V1 = -0.25 * (1 - V[0] + 1 - V[1] + 1 - V[2] + 1 - V[5])
V2 = -0.25 * (1 - V[1] + 1 - V[2] + 1 - V[3] + 1 - V[6])
V3 = -0.25 * (1 - V[2] + 1 - V[3] + 1 - V[3] + 1 - V[7])
V4 = -0.25 * (1 - V[4] + 1 - V[0] + 1 - V[5] + 1 - V[8])
V5 = -0.25 * (1 - V[4] + 1 - V[1] + 1 - V[6] + 1 - V[9])
V6 = -0.25 * (1 - V[5] + 1 - V[2] + 1 - V[7] + 1 - V[10])
V7 = -0.25 * (1 - V[6] + 1 - V[3] + 1 - V[7] + 1 - V[11])
V8 = -0.25 * (1 - V[8] + 1 - V[4] + 1 - V[9] + 1 - V[12])
V9 = -0.25 * (1 - V[8] + 1 - V[5] + 1 - V[10] + 1 - V[13])
V10 = -0.25 * (1 - V[9] + 1 - V[6] + 1 - V[11] + 1 - V[14])
V11 = -0.25 * (1 - V[10] + 1 - V[7] + 1 - V[11] + 1 - V[15])
V12 = -0.25 * (1 - V[12] + 1 - V[8] + 1 - V[13] + 1 - V[12])
V13 = -0.25 * (1 - V[12] + 1 - V[9] + 1 - V[14] + 1 - V[13])
V14 = -0.25 * (1 - V[13] + 1 - V[10] + 1 - V[15] + 1 - V[14])
V15 = V[15]


delta_lst = []
for i in range(num_states):
    # print the (rounded) old value, record the change, then assign the new backup
    verbose_op = tf.Print(V[i], [tf.round(V[i])], message = "value of V(%d) = " % i)
    delta_lst.append(tf.abs(V[i] - eval("V%d" % i)))
    with tf.control_dependencies([verbose_op]):
        V[i] = tf.assign(V[i], eval("V%d" % i))

# largest value change across all states in one sweep
delta = tf.reduce_max(delta_lst)

# stop once no value changes by more than the threshold
stop_op = tf.less(delta, 0.0001)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for i in range(num_iters):
        if sess.run(stop_op):
            print("\ncurrent iteration {}".format(i))
            break

        for j in range(num_states):
            sess.run(V[j])

Sunday, June 3, 2018

Steps to invoke PDB to debug python scripts

Actually, this is a replica of https://stackoverflow.com/questions/35496298/pdb-automatically-append-to-sys-path

It is probably tedious, but at least it works (a sample session is sketched after the list):

1. switch from python [my-script] to python -m pdb [my-script].
2. import sys
3. sys.path.append([full path to subdirectory where [module-XY] lies])
4. b [module-XY]:[line]
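
For concreteness, such a session might look like the following; my_script.py, /home/me/project/lib and module_xy are placeholder names, and the line number is arbitrary:

$ python -m pdb my_script.py
(Pdb) import sys
(Pdb) sys.path.append("/home/me/project/lib")
(Pdb) b module_xy:42
(Pdb) c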

Sunday, January 14, 2018

Simple bandit algorithm in TensorFlow

I find it nice to post here, so there will probably be more posts to come.

Thanks to Prof. Richard S. Sutton and Andrew G. Barto for open sourcing their wonderful textbook Reinforcement Learning: An Introduction. Google boasts that TensorFlow is a general numerical library, so presumably it can do everything. I have therefore decided to implement most of the examples in the textbook in TensorFlow. This post kicks off the attempt with the first example: the simple bandit algorithm.

Please refer to Figure 2.1 in the textbook and the pseudo code in Section 2.3 to understand the code. Here we go!

import tensorflow as tf
import sonnet as snt

class Bandit(snt.AbstractModule):
    def __init__(self, k, epsilon, num_iters, name = "bandit"):
        super(Bandit, self).__init__(name = name)
        self._k = k
        assert num_iters > 0, "invalid number of iterations"
        self._num_iters = num_iters
        assert epsilon > 0 and epsilon < 1, "invalid epsilon value"
        self._epsilon = epsilon
        with self._enter_variable_scope():
            # hard-coded means for the 10-armed testbed, so k is expected to be 10
            self._means = [0.2, -0.8, 1.6, 0.4, 1.4, -1.6, -0.2, -1.0, 0.8, -0.6]
            # pre-sample one reward per arm and per iteration around each arm's mean
            self._R = tf.stack([tf.truncated_normal([self._num_iters], mean) for mean in self._means], axis = 0)
            # Q: estimated action values, N: per-arm selection counts
            self._Q = tf.get_variable("values", [self._k], tf.float32, tf.zeros_initializer, trainable = False)
            self._N = tf.get_variable("occurs", [self._k], tf.int32, tf.constant_initializer(1, tf.int32), trainable = False)

    def _build(self, it):

        # pre-sampled exploration coin flips and random arms, one per iteration
        probs = tf.random_uniform([self._num_iters], 0.0, 1.0)
        acts = tf.random_uniform([self._num_iters], 0, self._k, tf.int32)
        # epsilon-greedy selection: with probability 1 - epsilon take the greedy
        # arm argmax_a Q(a), otherwise the pre-sampled random arm
        A = tf.cond(tf.gather(probs, it) >= self._epsilon, lambda: tf.argmax(self._Q, output_type=tf.int32), lambda: tf.gather(acts, it))
        # reward of the chosen arm at this iteration
        R = tf.gather_nd(self._R, [A, it])
        # sample-average update from the pseudo code in Section 2.3:
        #   N(A) <- N(A) + 1,  Q(A) <- Q(A) + (R - Q(A)) / N(A)
        self._N = tf.scatter_add(self._N, A, 1)
        R_incr =  tf.squeeze(1.0 / tf.cast(tf.gather(self._N, A), tf.float32) * (R - tf.gather(self._Q, A)))
        self._Q = tf.scatter_add(self._Q, A, R_incr)

        with tf.control_dependencies([self._Q, self._N]):
            A = tf.identity(A)
            R = tf.identity(R)

        # also return the incremental value, since the test below unpacks three outputs
        return A, R, R_incr

    def get_values(self):
        return self._Q

    def get_means(self):
        return self._means

def test():
    num_iters = 10000

    bandit10 = Bandit(10, 0.1, num_iters)

    it = tf.placeholder(tf.int32, [])
    a, r, r_incr = bandit10(it)
    q = bandit10.get_values()


    R_avg = tf.get_variable("average_reward", [], dtype = tf.float32, initializer = tf.zeros_initializer)
    R_avg = tf.assign_add(R_avg, r)
    tf.summary.scalar("action", a)
    tf.summary.scalar("reward", r)
    tf.summary.scalar("incremental_reward", r_incr)
    tf.summary.scalar("average_reward", tf.divide(R_avg, tf.cast(it, tf.float32)))
    tf.summary.text("estimated_values", tf.as_string(q))
    summ_op = tf.summary.merge_all()

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        writer = tf.summary.FileWriter("output", sess.graph)
        for i in range(num_iters):
            '''
            a_v, r_v, r_incr_v, q_v = sess.run([a, r, r_incr, q], feed_dict = {it: i})
            print("iteration {}: action {}, reward {}, incremental value {}".format(i, a_v, r_v, r_incr_v))
            print("estimated values are {}".format(q_v))
            '''

            print("iteration {}".format(i))
            summ_op_str = sess.run(summ_op, feed_dict = {it: i})
            writer.add_summary(summ_op_str, i)

        writer.close()

if __name__ == "__main__":
    test()