Wenpeng Yin
2017-07-18 20:20:41 UTC
Hi guys,
I have a long-standing problem when running Theano code on the GPU: even if I run the same program in two command windows (on the same GPU or on different GPUs), the two runs show different performance. Whether the difference is small or large depends on the task. This makes it difficult to judge whether a modification to the program is actually better or worse.
I cannot find the cause. I always use the same random seed, for example "rng = numpy.random.RandomState(23455)", whenever I create parameters, so the runs should repeat the same process, right?
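For reference, here is a simplified sketch of how the parameters are created (the layer sizes and the uniform range below are just placeholders, not the actual values from my code):

import numpy
import theano

# Placeholder layer sizes, just for illustration
n_in, n_out = 784, 100

# The numpy RNG is seeded once with a fixed seed
rng = numpy.random.RandomState(23455)

# Initial weights are drawn from the seeded RNG, cast to
# theano.config.floatX (float32 in my GPU setup), and stored
# in a shared variable
W_values = numpy.asarray(
    rng.uniform(low=-0.01, high=0.01, size=(n_in, n_out)),
    dtype=theano.config.floatX,
)
W = theano.shared(value=W_values, name='W', borrow=True)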
The only thing I can think of is that the GPU uses 32-bit floats rather than 64-bit, so some precision is lost?
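For what it's worth, this is how I check which device and precision Theano is using (the printed values depend on my .theanorc / THEANO_FLAGS settings):

import theano
print(theano.config.device)   # e.g. 'gpu' when running on the GPU, 'cpu' otherwise
print(theano.config.floatX)   # 'float32' in my GPU setup, 'float64' by default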
Thanks for any hints.