Discussion:
[theano-users] Conv3D and conv3d2d.conv3d give different results
Wang Xiyuan
2016-07-14 09:21:31 UTC
Hello, everyone.

I was testing theano.tensor.nnet.conv3D
and theano.tensor.nnet.conv3d2d.conv3d on the GPU, expecting the two
functions to give the same result. However, I found that when the input
size is identical to the filter size and the output is a scalar, the
results are different. When I switch to the CPU, the results become the
same. I'm quite puzzled; I guess it has something to do with cuDNN.

But is there any way to fix this?

I'm using Theano 0.8.2, cuDNN v4, and CUDA 7.5, and my OS is CentOS 6.5. A
simple example is in the attached script. (The script was adapted from the
one in this post:
https://groups.google.com/forum/#!msg/theano-users/1S9_bZgHxVw/0cQR9a4riFUJ)

Thank you very much.

Xiyuan
--
---
You received this message because you are subscribed to the Google Groups "theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to theano-users+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Frédéric Bastien
2016-07-15 06:28:23 UTC
I don't have time to investigate. Can you give the output of your script?

On the GPU, the operations aren't done in the same order, and with
floating-point arithmetic a+(b+c) != (a+b)+c. That could be the reason.

How big is the difference?
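The non-associativity Frédéric mentions is easy to check in plain Python; this is a generic illustration of the effect, not tied to the attached script:

```python
# Floating-point addition is not associative: the rounding error
# depends on the order in which the partial sums are formed.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # rounds a+b first
right = a + (b + c)  # rounds b+c first

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```

On a GPU, reductions are typically split across many threads, so the summation order (and hence the rounding) can differ from a sequential CPU sum. That explains tiny discrepancies, though not a result that is off by an integer amount.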
xiyuan wang
2016-07-15 08:34:35 UTC
Thank you very much for the reply.

In the script, I'm doing a 3D flip-type convolution between a 3*2*2
(time*width*height) signal x:

array([[[[[  0.,   1.],
          [  2.,   3.]]],
        [[[  4.,   5.],
          [  6.,   7.]]],
        [[[  8.,   9.],
          [ 10.,  11.]]]]], dtype=float32)

and a 3*2*2 kernel W:

array([[[[[ 0.,  0.],
          [ 0.,  0.]]],
        [[[ 1.,  0.],
          [ 0.,  0.]]],
        [[[ 0.,  0.],
          [ 0.,  0.]]]]], dtype=float32)

(To fit conv3d, x and W are shaped (1, 3, 1, 2, 2).)

The correct 3D convolution result is 7, which is exactly what conv3D
returns. However, conv3d2d.conv3d gives 11 on the GPU. Moreover, if I
change W to

array([[[[[ 0.,  0.],
          [ 0.,  0.]]],
        [[[ 0.,  0.],
          [ 0.,  0.]]],
        [[[ 1.,  0.],
          [ 0.,  0.]]]]], dtype=float32)

conv3D gives 3 but conv3d2d.conv3d still gives 11. On the CPU, conv3D and
conv3d2d.conv3d give the same result. I'm wondering whether this problem
can be reproduced.
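For reference, the expected scalar outputs above can be verified with a small hand-rolled NumPy check (this reimplements the flip-type convolution directly rather than calling the Theano ops, so it only confirms what the correct answers should be):

```python
import numpy as np

# Signal x: 3*2*2 (time*width*height), values 0..11 as in the post.
x = np.arange(12, dtype=np.float32).reshape(3, 2, 2)

# First kernel W: a single 1 at (t=1, row=0, col=0), zeros elsewhere.
W = np.zeros((3, 2, 2), dtype=np.float32)
W[1, 0, 0] = 1.0

def conv3d_scalar(signal, kernel):
    """Flip-type ("true") 3D convolution for the case where the input
    and filter sizes are equal: flip the kernel along every axis and
    take a single elementwise dot product."""
    return float(np.sum(signal * kernel[::-1, ::-1, ::-1]))

print(conv3d_scalar(x, W))   # 7.0 -- matches conv3D's answer

# Second kernel: a single 1 at (t=2, row=0, col=0).
W2 = np.zeros((3, 2, 2), dtype=np.float32)
W2[2, 0, 0] = 1.0
print(conv3d_scalar(x, W2))  # 3.0 -- conv3D agrees; conv3d2d gave 11
```

Since moving the 1 inside the kernel must move which element of x is picked out, a conv3d2d.conv3d answer stuck at 11 for both kernels cannot be a rounding-order effect; it looks like an indexing bug on the GPU path.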


j***@gmail.com
2018-06-23 16:59:17 UTC
First of all, sorry, I know this is a very old topic, but I am facing the
same problem and am wondering if anyone knows why this happens, since I
get the same weird result when using conv3d2d.conv3d.