Post by Christopher Bourez
- In this example, I'm importing an unmodified GpuEye op from Theano
basic ops
- If I'm using theano.tensor.eye, then it does not use the GpuEye
OK, I assumed that you had started from the implementation of GpuEye to
implement a new GPU Op.
Your original example seems to work for me, though, so it may have to do
with your setup:
In [3]: import theano
   ...: from theano.gpuarray.basic_ops import GpuEye
   ...:
   ...: x = theano.tensor.iscalar('x')
   ...: y = theano.tensor.iscalar('y')
   ...: z = GpuEye(dtype='float32', context_name=None)(x, y,
   ...:     theano.tensor.constant(0))
   ...:
   ...: theano.printing.debugprint(z)
   ...: print("Compiling")
   ...: f = theano.function([x, y], z)
   ...: theano.printing.debugprint(f)
   ...: print("Results")
   ...: print(f(3, 3))
   ...:
GpuEye{dtype='float32', context_name=None} [id A] ''
|x [id B]
|y [id C]
|TensorConstant{0} [id D]
Compiling
GpuEye{dtype='float32', context_name=None} [id A] '' 0
|x [id B]
|y [id C]
|TensorConstant{0} [id D]
Results
[[ 1. 0. 0.]
[ 0. 1. 0.]
[ 0. 0. 1.]]
Post by Christopher Bourez
Also, are you sure this test
https://github.com/Theano/Theano/blob/2625464534147fd70da60a3a3ddcb63ed8e5a416/theano/gpuarray/tests/test_basic_ops.py#L401
works well?
Yes, it gets tested in our daily buildbot and on several pull requests per
week by our continuous integration systems. I also just launched it
manually:
$ theano-nose theano/gpuarray/tests/test_basic_ops.py:test_gpueye
Can not use cuDNN on context None: Disabled by dnn.enabled flag
Mapped name None to device cuda: GeForce GTX 580 (0000:02:00.0)
.............................................
----------------------------------------------------------------------
Ran 45 tests in 21.645s
OK
Post by Christopher Bourez
I've also tried to create an example with theano.gpuarray.nnet.GpuSoftmax,
but after compilation it got replaced by another implementation, GpuDnnSoftmax:
Yes, there is an optimization that does that if cuDNN is available. You
should be able to disable it with `optimizer_excluding=local_softmax_dnn`.
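For instance, something like this should work (a sketch; it assumes a
working gpuarray backend, and excludes the optimization only for this one
function):

import theano
import theano.tensor as T

x = T.fmatrix('x')
y = T.nnet.softmax(x)

# Exclude the cuDNN substitution for this function only; the same effect
# can be obtained globally with
# THEANO_FLAGS=optimizer_excluding=local_softmax_dnn
mode = theano.compile.get_default_mode().excluding('local_softmax_dnn')
f = theano.function([x], y, mode=mode)
theano.printing.debugprint(f)  # should show GpuSoftmax, not GpuDnnSoftmax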
Post by Christopher Bourez
A second thing that is not clear to me in the documentation of Theano is
how you specify a C implementation and a GPU implementation of the same
custom Op. Thank you
You do not specify C and GPU implementations for the same Op, what we have
in general is two different Ops, one that has CPU inputs and outputs, and
computes on CPU, and another one with GPU inputs and outputs, that computes
on GPU.
This is necessary because the Variables in Theano are strongly typed, and
the device is part of the type.
There are optimizations that replace CPU Ops by GPU ones, inserting
transfer Ops (GpuFromHost, HostFromGpu) if necessary.
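A quick way to see those transfers (a sketch, assuming the device is set
to the GPU, e.g. with THEANO_FLAGS=device=cuda):

import theano
import theano.tensor as T

x = T.fmatrix('x')  # a CPU-typed input variable
f = theano.function([x], x * 2)
# The compiled graph should show GpuFromHost feeding a GpuElemwise,
# followed by HostFromGpu bringing the result back to the CPU.
theano.printing.debugprint(f)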
GPU Ops, like CPU ones, can have C (using CUDA) or Python implementations
(or both).
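To illustrate the Python side, here is a minimal sketch of a custom CPU
Op (DoubleOp is a made-up example, close to the one in the "Creating a
new Op" tutorial; a C implementation of the same Op would be supplied
through an additional c_code() method, or CUDA kernels for a GPU Op):

import theano
from theano import gof

class DoubleOp(gof.Op):
    """Toy CPU Op that doubles its input."""
    __props__ = ()

    def make_node(self, x):
        x = theano.tensor.as_tensor_variable(x)
        return gof.Apply(self, [x], [x.type()])

    # Python implementation, used when no C implementation is available
    def perform(self, node, inputs, output_storage):
        (x,) = inputs
        output_storage[0][0] = x * 2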
Post by Christopher Bourez
What surprises me is to get segfaults in the theano function, while I
would have expected them to occur during evaluation on values...
It is strange indeed. It is possible that some GPU operations are
executed on the GPU during the compilation phase, for instance for
constant folding (constant propagation).
Does it happen as well with the latest master from GitHub?
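One way to test that hypothesis (a sketch; constant_folding is the name
of the optimization that evaluates constant subgraphs at compile time):

# If the segfault disappears with constant folding excluded, the Op was
# indeed being executed during compilation.
mode = theano.compile.get_default_mode().excluding('constant_folding')
f = theano.function([x, y], z, mode=mode)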
Post by Christopher Bourez
Post by Christopher Bourez
Elemwise{mul,no_inplace} [id A] ''
 |HostFromGpu(gpuarray) [id B] ''
 | |GpuSoftmax [id C] ''
 | | |GpuFromHost<dev0> [id D] ''
 | | | |x [id E]
 |InplaceDimShuffle{x,x} [id F] ''
 | |TensorConstant{2} [id G]
Compiling
HostFromGpu(gpuarray) [id A] ''   5
 |GpuElemwise{Mul}[(0, 1)]<gpuarray> [id B] ''   4
 | |GpuArrayConstant{[[ 2.]]} [id C]
 | |InplaceGpuDimShuffle{0,1} [id D] ''   3
 | | |GpuDnnSoftmax{mode='channel', algo='accurate'} [id E] ''   2
 | | | |GpuContiguous [id F] ''   1
 | | | | |InplaceGpuDimShuffle{0,1,x,x} [id G] ''   0
 | | | | | |<GpuArrayType<dev0>(float32, (False, False))> [id H]
I'm looking for a good example with a GPU Kernel.
Post by Pascal Lamblin
Does it work if you do not modify the source for GpuEye at all?
If it does, then maybe sharing your new source would get you more help.
Post by Christopher Bourez
Hi,
I'm trying to implement a simple GPU op, but it always gives me a
segmentation fault during compilation, without any other message.
import theano
from theano.gpuarray.basic_ops import GpuEye

x = theano.tensor.iscalar('x')
y = theano.tensor.iscalar('y')
z = GpuEye(dtype='float32', context_name=None)(x, y,
    theano.tensor.constant(0))

theano.printing.debugprint(z)
print("Compiling")
f = theano.function([x, y], z)
theano.printing.debugprint(f)
print("Results")
print(f(3, 3))
I've also tried with the softmax GPU function. Is there something I'm
missing?
I copied the file and created a completely new op; the segmentation
fault appears when I define a Kernel in the gpu_kernels() method of the op.
Thank you very much for your help