Discussion:
[theano-users] Theano 0.9.0: GPU is printed, but not used?
Meier Benjamin
2017-06-15 21:45:25 UTC
Permalink
Hello,

I use the following test program:
https://theano.readthedocs.io/en/0.8.x/tutorial/using_gpu.html

from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768 # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')

And I get this output:

***@21cfc9b009d4:/code/tmp/test# THEANO_FLAGS='floatX=float32,device=cuda0' python gpu_test.py
Using cuDNN version 5105 on context None
Mapped name None to device cuda0: TITAN X (Pascal) (0000:87:00.0)
[GpuElemwise{exp,no_inplace}(<GpuArrayType<None>(float32, (False,))>), HostFromGpu(gpuarray)(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 0.221684 seconds
Result is [ 1.23178029 1.61879349 1.52278066 ..., 2.20771813 2.29967761
1.62323296]
Used the cpu


For some reason Theano still seems to use the CPU, even though it prints the GPU info at startup. Am I doing something wrong?

Thank you very much
--
---
You received this message because you are subscribed to the Google Groups "theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to theano-users+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Daniel Seita
2017-06-16 22:37:28 UTC
Permalink
Not sure if this affects the result but note that the link you provided is
for theano 0.8.X, not theano 0.9.0 as your title implies.
Meier Benjamin
2017-06-18 21:31:36 UTC
Permalink
Thanks for the hint :) You are right.

I looked up the code for Theano 0.9
(link: http://deeplearning.net/software/theano/tutorial/using_gpu.html) and
used it for another test. Unfortunately, the effect is the same.

Maybe it really works for this example code, but my application does not
seem to benefit: it is as slow with the GPU flag as with the CPU flag.
With older versions of Theano (and Lasagne) it was faster, but I also changed
the GPU (GTX 780 to Titan X Pascal).
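
To see where the time in the application actually goes, one option (a suggestion, not something mentioned in this thread) is Theano's built-in profiler, enabled via the `profile` flag; `my_script.py` below is a placeholder for your own application:

```shell
# Run once with profiling enabled; Theano prints a per-op timing table
# at exit, which shows whether the slow ops are CPU ops (e.g. Elemwise)
# or host/device transfers (HostFromGpu / GpuFromHost).
THEANO_FLAGS='floatX=float32,device=cuda0,profile=True' python my_script.py
```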
Frédéric Bastien
2017-06-19 22:15:27 UTC
Permalink
The code ran on the GPU. This code is very simple, so I'm not surprised that
it doesn't always get a speed-up.

You are using the GPU correctly. The problem is in the detection logic that
selects which message to print; it needs to be updated for the new backend.
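
In other words, "Used the cpu" here comes from the outdated check, not from actual CPU execution: the old test treats any `Elemwise` op as proof of CPU work, but the new gpuarray backend's `GpuElemwise` is itself an Elemwise subclass. Below is a minimal, Theano-free sketch of an updated check that operates on the op names printed by `toposort()` (purely for illustration; the 0.9 tutorial's version instead tests `isinstance(x.op, tensor.Elemwise)` combined with `'Gpu' not in type(x.op).__name__`):

```python
def used_gpu(op_names):
    """Guess from a toposorted graph's op names whether it ran on the GPU.

    A plain 'Elemwise' op (no 'Gpu' prefix) indicates CPU execution;
    'Gpu'-prefixed ops indicate the new gpuarray backend.
    """
    has_cpu_elemwise = any(name.startswith('Elemwise') for name in op_names)
    has_gpu_op = any(name.startswith('Gpu') for name in op_names)
    return has_gpu_op and not has_cpu_elemwise

# Op names taken from the output earlier in the thread: a GPU graph.
gpu_graph = ['GpuElemwise{exp,no_inplace}', 'HostFromGpu(gpuarray)']
cpu_graph = ['Elemwise{exp,no_inplace}']

print(used_gpu(gpu_graph))  # True  -> would print 'Used the gpu'
print(used_gpu(cpu_graph))  # False -> would print 'Used the cpu'
```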