Discussion:
[theano-users] Computing partial derivatives of a MLP
Bernardo Biesseck
2016-09-12 15:01:06 UTC
I'm using the Keras library, but my problem lies in Theano functions, which is
why I posted this question here. I need to compute the partial derivatives
separately because one of them will later be calculated numerically. For now
I'm just trying to compute the same derivatives that Theano computes for a
simple MLP. The partial derivatives of the last layer of a common MLP are
calculated as

dL/dW = dL/d_ypred * d_ypred/d_netoutput * d_netoutput/d_W,

where

L = Loss function = sqrt(sum(square(y_true - y_pred))) (Euclidean distance)
y_pred = sigmoid(net_output)
net_output = f(X,W) + b.

I modified the mnist_siamese_graph.py
<https://github.com/fchollet/keras/blob/master/examples/mnist_siamese_graph.py>
program into a simple object-oriented version (to get access to some
variables), shown at the end of this post. If I compute the gradients as

grads = {}
for wrt in trainable_weights:
    grads[wrt] = T.grad(total_loss, wrt)


and pass "grads" dictionary as argument to "known_grads" parameter in
"theano.tensor.grad()" method the entire process runs exactly as original
version, but I can't compute each partial derivative separated. What I'm
trying to do is

def compute_gradients2(self, total_loss, trainable_weights):
    # total_loss = Elemwise{mul,no_inplace}.0
    # trainable_weights = [dense_1_W, dense_1_b, dense_2_W, dense_2_b, dense_3_W, dense_3_b]
    grads = {}
    dLoss_dypred = T.grad(total_loss, self.y_pred)
    for wrt in trainable_weights:
        dypred_dnetoutput1 = T.grad(self.y_pred, self.processed_a)
        dypred_dnetoutput2 = T.grad(self.y_pred, self.processed_b)
        dnetoutput1_dW = T.grad(self.processed_a.output, wrt)
        dnetoutput2_dW = T.grad(self.processed_b.output, wrt)
        grads[wrt] = (dLoss_dypred * dypred_dnetoutput1 * dnetoutput1_dW) + (dLoss_dypred * dypred_dnetoutput2 * dnetoutput2_dW)
    return grads



The error is raised at the line

dypred_dnetoutput1 = T.grad(self.y_pred, self.processed_a)
*TypeError: cost must be a scalar.*


because

self.y_pred.ndim = 2



My complete source code is below:

from __future__ import absolute_import
from __future__ import print_function
import numpy as np

import random
from keras.datasets import mnist
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Input, Lambda
from keras.optimizers import SGD, RMSprop
from keras import backend as K
from theano import tensor as T
from keras.engine.training import *


class MNIST_SIAMESE(object):

    def __init__(self):
        self.seq = None
        self.input_a = None
        self.input_b = None
        self.processed_a = None
        self.processed_b = None
        self.distance = None
        self.model = None
        self.rms = None


    def setModel(self, input_dim):
        # Base network to be shared (eq. to feature extraction).
        self.seq = Sequential()
        self.seq.add(Dense(128, input_shape=(input_dim,), activation='relu'))
        self.seq.add(Dropout(0.1))
        self.seq.add(Dense(128, activation='relu'))
        self.seq.add(Dropout(0.1))
        self.seq.add(Dense(128, activation='relu'))

        self.input_a = Input(shape=(input_dim,))
        self.input_b = Input(shape=(input_dim,))

        # because we re-use the same instance `base_network`,
        # the weights of the network
        # will be shared across the two branches
        self.processed_a = self.seq(self.input_a)
        self.processed_b = self.seq(self.input_b)

        self.distance = Lambda(self.euclidean_distance, output_shape=self.eucl_dist_output_shape)([self.processed_a, self.processed_b])
        self.model = Model(input=[self.input_a, self.input_b], output=self.distance)

        # train
        self.rms = RMSprop()
        self.model.compile(loss=self.contrastive_loss, optimizer=self.rms)


    def compute_gradients1(self, total_loss, variables):
        # loss = Elemwise{mul,no_inplace}.0
        # variables = [dense_1_W, dense_1_b, dense_2_W, dense_2_b, dense_3_W, dense_3_b]
        grads = {}
        for wrt in variables:
            grads[wrt] = T.grad(total_loss, wrt)
        return grads


    def compute_gradients2(self, total_loss, variables):
        # loss = Elemwise{mul,no_inplace}.0
        # variables = [dense_1_W, dense_1_b, dense_2_W, dense_2_b, dense_3_W, dense_3_b]
        grads = {}
        dLoss_dypred = T.grad(total_loss, self.y_pred)
        for wrt in variables:
            dypred_dnetoutput1 = T.grad(self.y_pred, self.processed_a)
            dypred_dnetoutput2 = T.grad(self.y_pred, self.processed_b)
            dnetoutput1_dW = T.grad(self.processed_a.output, wrt)
            dnetoutput2_dW = T.grad(self.processed_b.output, wrt)
            grads[wrt] = (dLoss_dypred * dypred_dnetoutput1 * dnetoutput1_dW) + (dLoss_dypred * dypred_dnetoutput2 * dnetoutput2_dW)
        return grads


    def euclidean_distance(self, vects):
        x, y = vects
        self.euclDist = K.sqrt(K.sum(K.square(x - y), axis=1, keepdims=True))
        return self.euclDist


    def eucl_dist_output_shape(self, shapes):
        shape1, shape2 = shapes
        return (shape1[0], 1)


    def contrastive_loss(self, y_true, y_pred):
        '''Contrastive loss from Hadsell-et-al.'06
        http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
        '''
        self.y_pred = y_pred
        margin = 1
        return K.mean(y_true * K.square(y_pred) + (1 - y_true) * K.square(K.maximum(margin - y_pred, 0)))


    def train(self, train_set, val_set, batch_size, nb_epoch):
        train_pairs, train_y = train_set
        test_pairs, test_y = val_set

        # Original
        # self.model.fit([train_pairs[:, 0], train_pairs[:, 1]], train_y,
        #                validation_data=([test_pairs[:, 0], test_pairs[:, 1]], test_y),
        #                batch_size=batch_size,
        #                nb_epoch=nb_epoch)

        print('Computing gradients...')
        grads = self.compute_gradients2(self.model.total_loss, self.model.layers[2].trainable_weights)
        self.model.fit_knownGrads([train_pairs[:, 0], train_pairs[:, 1]], train_y,
                                  validation_data=([test_pairs[:, 0], test_pairs[:, 1]], test_y),
                                  batch_size=batch_size,
                                  nb_epoch=nb_epoch,
                                  knownGrads=grads)


    def predict(self, samples):
        labels = self.model.predict([samples[:, 0], samples[:, 1]])
        return labels


def create_pairs(x, digit_indices):
    '''Positive and negative pair creation.
    Alternates between positive and negative pairs.
    '''
    pairs = []
    labels = []
    n = min([len(digit_indices[d]) for d in range(10)]) - 1
    for d in range(10):
        for i in range(n):
            z1, z2 = digit_indices[d][i], digit_indices[d][i+1]
            pairs += [[x[z1], x[z2]]]
            inc = random.randrange(1, 10)
            dn = (d + inc) % 10
            z1, z2 = digit_indices[d][i], digit_indices[dn][i]
            pairs += [[x[z1], x[z2]]]
            labels += [1, 0]
    return np.array(pairs), np.array(labels)


def compute_accuracy(predictions, labels):
    '''Compute classification accuracy with a fixed threshold on distances.
    '''
    return labels[predictions.ravel() < 0.5].mean()


if __name__ == '__main__':

    np.random.seed(1337)  # for reproducibility

    # the data, shuffled and split between train and test sets
    (X_train, y_train), (X_test, y_test) = mnist.load_data()
    X_train = X_train.reshape(60000, 784)
    X_test = X_test.reshape(10000, 784)
    X_train = X_train.astype('float32')
    X_test = X_test.astype('float32')
    X_train /= 255
    X_test /= 255
    input_dim = 784
    batchh_size = 128
    nb_epochh = 3

    # create training+test positive and negative pairs
    digit_indices = [np.where(y_train == i)[0] for i in range(10)]
    tr_pairs, tr_y = create_pairs(X_train, digit_indices)
    trainnn_set = [tr_pairs, tr_y]

    digit_indices = [np.where(y_test == i)[0] for i in range(10)]
    te_pairs, te_y = create_pairs(X_test, digit_indices)
    testtt_set = [te_pairs, te_y]

    # network definition
    base_network = MNIST_SIAMESE()
    base_network.setModel(input_dim)

    base_network.train(trainnn_set, testtt_set, batchh_size, nb_epochh)

    # compute final accuracy on training and test sets
    pred = base_network.predict(tr_pairs)
    tr_acc = compute_accuracy(pred, tr_y)
    pred = base_network.predict(te_pairs)
    te_acc = compute_accuracy(pred, te_y)

    print('* Accuracy on training set: %0.2f%%' % (100 * tr_acc))
    print('* Accuracy on test set: %0.2f%%' % (100 * te_acc))



Thanks for any contribution!
Nicolás Mercado
2018-01-25 04:38:00 UTC
Were you able to find a workaround?
My suggestion is to use T.Lop instead of T.grad.
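
For example, something along these lines (a minimal standalone sketch with made-up variable names, not tied to the Keras model above): T.Lop(f, wrt, eval_points) returns the product of eval_points with the Jacobian df/d(wrt), so passing dL/df as eval_points gives dL/d(wrt) without ever building the full Jacobian.

import theano
import theano.tensor as T

x = T.vector('x')
W = T.matrix('W')
f = T.nnet.sigmoid(T.dot(W, x))    # vector-valued layer output
loss = T.sum(f ** 2)               # some scalar loss built on top of f

dloss_df = T.grad(loss, f)         # dL/df (a vector, since loss is a scalar)
dloss_dW = T.Lop(f, W, dloss_df)   # dL/dW = dL/df . df/dW, same shape as W

fn = theano.function([x, W], [dloss_df, dloss_dW])
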
Bernardo Biesseck
2018-01-25 12:51:48 UTC
Yes, I solved the problem. Theano doesn't calculate derivatives of a vector
with respect to another vector or matrix; the cost passed to T.grad() must be a
scalar. To compute the derivative of a vector-valued expression, one can loop
over its elements and call T.grad() on each one. That's what I did.
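
In case it helps someone later, a minimal sketch of that idea on a toy expression (names are illustrative only, and I assume here that f has 3 elements; this is not the Keras model from the thread): call T.grad() once per scalar element of the vector-valued expression and stack the results into the Jacobian.

import theano
import theano.tensor as T

x = T.vector('x')
f = x ** 2                                   # vector-valued expression

# one T.grad() call per scalar element of f, stacked into the Jacobian df/dx
rows = [T.grad(f[i], x) for i in range(3)]
jac = T.stack(rows)

jacobian_fn = theano.function([x], jac)
print(jacobian_fn([1.0, 2.0, 3.0]))          # [[2. 0. 0.] [0. 4. 0.] [0. 0. 6.]]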

Best regards!

Nicolás Mercado
2018-01-25 14:02:59 UTC
Hey, thanks for your prompt response.
Just wanted to ask: do you mean you used a loop over each of the elements for
which you are trying to calculate the partial derivative?
Did you achieve that with T.scan combined with T.grad, or with a handwritten
loop?

Thanks!
Bernardo Biesseck
2018-01-25 14:33:26 UTC
Exactly, I used T.scan() combined with T.grad(). Just to avoid confusion: you
must iterate over the variable which represents the function whose derivative
you want to compute, not over the variables you differentiate with respect to.
Example: f = 2*x, where x ∊ R^2. Since f ∊ R^2, T.grad() will not compute its
derivative w.r.t. x directly. Using T.scan() one can get ∂f[0]/∂x and ∂f[1]/∂x,
as in the sketch below.
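
A minimal sketch of that pattern for the toy f = 2*x example (essentially the Jacobian recipe from the Theano documentation, not the Keras model from the original post): scan over the indices of f and call T.grad() on each scalar element, which yields the Jacobian of f w.r.t. x row by row.

import theano
import theano.tensor as T

x = T.vector('x')
f = 2 * x

# scan over the indices of f; each step computes the gradient of the scalar f[i] w.r.t. x
jac, updates = theano.scan(
    lambda i, f, x: T.grad(f[i], x),
    sequences=T.arange(f.shape[0]),
    non_sequences=[f, x])

jacobian_fn = theano.function([x], jac, updates=updates)
print(jacobian_fn([1.0, 3.0]))  # [[2. 0.] [0. 2.]]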
