Yaojie Lu
2018-07-30 01:26:26 UTC
Hello,
I want to create a custom Op for use in PyMC3.
This Op finds the root of the function f(x) = x + env*exp(x) - a*b^2, where
env = np.array([1, 2]), so the root-finding function should return a vector
for any given pair of a and b.
This is how I define it:
from scipy import optimize
import numpy as np
import theano
import theano.tensor as tt

envod = np.array([1, 2])

def func(x, a, b, env):
    return x + env * np.exp(x) - a * b**2

def jac(x, a, b, env):
    return 1 + env * np.exp(x)

def x_from_ab(a, b, env):
    value = np.zeros(len(env))
    for i in range(len(env)):
        value[i] = optimize.newton(func, 1, fprime=jac, args=(a, b, env[i]))
    return value
class Xf(tt.Op):
    itypes = [tt.dscalar, tt.dscalar]
    otypes = [tt.dvector]

    def perform(self, node, inputs, outputs):
        a, b = inputs
        x = x_from_ab(a, b, envod)
        outputs[0][0] = np.array(x)

    def grad(self, inputs, output_gradients):
        a, b = inputs
        x = self(a, b)
        g, = output_gradients
        return [-g[0] * (-b**2) / (1 + envod[0] * tt.exp(x[0])),
                -g[0] * (-2*a*b) / (1 + envod[0] * tt.exp(x[0]))]
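To convince myself the implicit-function-theorem formula (dx/da = b^2 / (1 + env*exp(x)), dx/db = 2ab / (1 + env*exp(x))) is right before wiring it into grad(), I checked it against finite differences in plain NumPy/SciPy, outside Theano. This is just my own standalone sketch (the names and test values here are mine):

```python
import numpy as np
from scipy import optimize

env = np.array([1.0, 2.0])

def x_from_ab(a, b):
    # Root of f(x) = x + env_i*exp(x) - a*b**2 for each env_i.
    return np.array([
        optimize.newton(lambda x: x + e * np.exp(x) - a * b**2,
                        1.0,
                        fprime=lambda x: 1 + e * np.exp(x))
        for e in env
    ])

a, b = 2.0, 1.5
x = x_from_ab(a, b)

# Analytic gradients from implicit differentiation of f(x(a,b), a, b) = 0:
# dx/dtheta = -(df/dtheta) / (df/dx), with df/dx = 1 + env*exp(x).
dx_da = b**2 / (1 + env * np.exp(x))
dx_db = 2 * a * b / (1 + env * np.exp(x))

# Central finite-difference check of both gradients.
eps = 1e-6
dx_da_fd = (x_from_ab(a + eps, b) - x_from_ab(a - eps, b)) / (2 * eps)
dx_db_fd = (x_from_ab(a, b + eps) - x_from_ab(a, b - eps)) / (2 * eps)
print(np.allclose(dx_da, dx_da_fd), np.allclose(dx_db, dx_db_fd))
```

The analytic and finite-difference values agree for me, so I believe the formula itself is fine; my question is about expressing it properly in grad().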
I wonder how I should define grad(). I have read all the
posts and documentation that I could find. Any suggestion or link to a
useful reference is welcome.
Many thanks!
--
---
You received this message because you are subscribed to the Google Groups "theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to theano-users+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.