Discussion:
[theano-users] theano assert error.
Ragav Venkatesan
2017-03-08 05:38:31 UTC
I have never seen this error and I am unable to understand it. Any help
will be much appreciated.
Theano 0.9rc3, using the CUDA backend.

    storage_map=getattr(self.fn, 'storage_map', None))
  File "/Users/ragav/anaconda/lib/python2.7/site-packages/theano/gof/link.py", line 325, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "/Users/ragav/anaconda/lib/python2.7/site-packages/theano/compile/function_module.py", line 884, in __call__
    self.fn() if output_subset is None else\
AssertionError: Theano Assert failed!

Apply node that caused the error: Assert{msg='Theano Assert
failed!'}(GpuElemwise{Composite{tanh((i0 + i1))}}[(0, 0)].0,
TensorConstant{False})

Toposort index: 140

Inputs types: [CudaNdarrayType(float32, 4D), TensorType(bool, scalar)]

Inputs shapes: [(100, 1, 28, 28), ()]

Inputs strides: [(784, 0, 28, 1), ()]

Inputs values: ['not shown', array(False, dtype=bool)]

Inputs type_num: ['', 0]

Outputs clients: [[Assert{msg='Theano Assert failed!'}(Assert{msg='Theano
Assert failed!'}.0, TensorConstant{False})]]


Debugprint of the apply node:

Assert{msg='Theano Assert failed!'} [id A] <CudaNdarrayType(float32, 4D)>
''

|GpuElemwise{Composite{tanh((i0 + i1))}}[(0, 0)] [id B]
<CudaNdarrayType(float32, 4D)> ''

| |GpuDnnConvGradI{algo='none', inplace=True} [id C]
<CudaNdarrayType(float32, 4D)> ''

| | |GpuContiguous [id D] <CudaNdarrayType(float32, 4D)> ''

| | | |filterbank [id E] <CudaNdarrayType(float32, 4D)>

| | |GpuContiguous [id F] <CudaNdarrayType(float32, 4D)> ''

| | | |GpuReshape{4} [id G] <CudaNdarrayType(float32, 4D)> ''

| | | |GpuElemwise{Composite{(i0 * ((i1 + i2) + Abs((i1 + i2))))}}[(0,
1)] [id H] <CudaNdarrayType(float32, matrix)> ''

| | | | |CudaNdarrayConstant{[[ 0.5]]} [id I] <CudaNdarrayType(float32,
(True, True))>

| | | | |GpuDot22 [id J] <CudaNdarrayType(float32, matrix)> ''

| | | | | |GpuElemwise{Composite{(i0 * ((i1 + i2) + Abs((i1 +
i2))))}}[(0, 1)] [id K] <CudaNdarrayType(float32, matrix)> ''

| | | | | | |CudaNdarrayConstant{[[ 0.5]]} [id I]
<CudaNdarrayType(float32, (True, True))>

| | | | | | |GpuDot22 [id L] <CudaNdarrayType(float32, matrix)> ''

| | | | | | | |GpuReshape{2} [id M] <CudaNdarrayType(float32, matrix)>
''

| | | | | | | | |GpuJoin [id N] <CudaNdarrayType(float32, vector)> ''

| | | | | | | | | |TensorConstant{0} [id O] <TensorType(int8, scalar)>

| | | | | | | | | |GpuElemwise{Composite{(i0 * cos(i1))},no_inplace} [id
P] <CudaNdarrayType(float32, vector)> ''

| | | | | | | | | | |GpuElemwise{Composite{sqrt((i0 *
log(i1)))},no_inplace} [id Q] <CudaNdarrayType(float32, vector)> ''

| | | | | | | | | | | |CudaNdarrayConstant{[-2.]} [id R]
<CudaNdarrayType(float32, (True,))>

| | | | | | | | | | | |GpuSubtensor{:int64:} [id S]
<CudaNdarrayType(float32, vector)> ''

| | | | | | | | | | | |GPU_mrg_uniform{CudaNdarrayType(float32,
vector),inplace}.1 [id T] <CudaNdarrayType(float32, vector)> ''

| | | | | | | | | | | | |<CudaNdarrayType(float32, vector)> [id U]
<CudaNdarrayType(float32, vector)>

| | | | | | | | | | | | |TensorConstant{(1,) of 1000} [id V]
<TensorType(int64, (True,))>

| | | | | | | | | | | |Constant{500} [id W] <int64>

| | | | | | | | | | |GpuElemwise{Mul}[(0, 1)] [id X]
<CudaNdarrayType(float32, vector)> ''

| | | | | | | | | | |CudaNdarrayConstant{[ 6.28318548]} [id Y]
<CudaNdarrayType(float32, (True,))>

| | | | | | | | | | |GpuSubtensor{int64::} [id Z]
<CudaNdarrayType(float32, vector)> ''

| | | | | | | | | | |GPU_mrg_uniform{CudaNdarrayType(float32,
vector),inplace}.1 [id T] <CudaNdarrayType(float32, vector)> ''

| | | | | | | | | | |Constant{500} [id W] <int64>

| | | | | | | | | |GpuElemwise{Composite{(i0 * sin(i1))}}[(0, 0)] [id
BA] <CudaNdarrayType(float32, vector)> ''

| | | | | | | | | |GpuElemwise{Composite{sqrt((i0 *
log(i1)))},no_inplace} [id Q] <CudaNdarrayType(float32, vector)> ''

| | | | | | | | | |GpuElemwise{Mul}[(0, 1)] [id X]
<CudaNdarrayType(float32, vector)> ''

| | | | | | | | |TensorConstant{[100 10]} [id BB] <TensorType(int64,
vector)>

| | | | | | | |weights [id BC] <CudaNdarrayType(float32, matrix)>

| | | | | | |GpuDimShuffle{x,0} [id BD] <CudaNdarrayType(float32, row)>
''

| | | | | | |bias [id BE] <CudaNdarrayType(float32, vector)>

| | | | | |weights [id BF] <CudaNdarrayType(float32, matrix)>

| | | | |GpuDimShuffle{x,0} [id BG] <CudaNdarrayType(float32, row)> ''

| | | | |bias [id BH] <CudaNdarrayType(float32, vector)>

| | | |TensorConstant{[100 10 13 13]} [id BI] <TensorType(int64,
vector)>

| | |GpuAllocEmpty [id BJ] <CudaNdarrayType(float32, 4D)> ''

| | | |TensorConstant{100} [id BK] <TensorType(int64, scalar)>

| | | |Shape_i{1} [id BL] <TensorType(int64, scalar)> ''

| | | | |filterbank [id E] <CudaNdarrayType(float32, 4D)>

| | | |TensorConstant{28} [id BM] <TensorType(int64, scalar)>

| | | |TensorConstant{28} [id BN] <TensorType(int64, scalar)>

| | |GpuDnnConvDesc{border_mode='valid', subsample=(2, 2),
conv_mode='conv', precision='float32'} [id BO]
<CDataType{cudnnConvolutionDescriptor_t}> ''

| | | |MakeVector{dtype='int64'} [id BP] <TensorType(int64, vector)> ''

| | | | |TensorConstant{100} [id BK] <TensorType(int64, scalar)>

| | | | |Shape_i{1} [id BL] <TensorType(int64, scalar)> ''

| | | | |TensorConstant{28} [id BM] <TensorType(int64, scalar)>

| | | | |TensorConstant{28} [id BN] <TensorType(int64, scalar)>

| | | |MakeVector{dtype='int64'} [id BQ] <TensorType(int64, vector)> ''

| | | |Shape_i{0} [id BR] <TensorType(int64, scalar)> ''

| | | | |filterbank [id E] <CudaNdarrayType(float32, 4D)>

| | | |Shape_i{1} [id BL] <TensorType(int64, scalar)> ''

| | | |Shape_i{2} [id BS] <TensorType(int64, scalar)> ''

| | | | |filterbank [id E] <CudaNdarrayType(float32, 4D)>

| | | |Shape_i{3} [id BT] <TensorType(int64, scalar)> ''

| | | |filterbank [id E] <CudaNdarrayType(float32, 4D)>

| | |Constant{1.0} [id BU] <float32>

| | |Constant{0.0} [id BV] <float32>

| |GpuDimShuffle{x,0,x,x} [id BW] <CudaNdarrayType(float32, (True, False,
True, True))> ''

| |bias [id BX] <CudaNdarrayType(float32, vector)>

|TensorConstant{False} [id BY] <TensorType(bool, scalar)>


Storage map footprint:

- <CudaNdarrayType(float32, matrix)>, Shared Input, Shape: (50000, 784),
ElemSize: 4 Byte(s), TotalSize: 156800000 Byte(s)

- weights, Shared Input, Shape: (1200, 1690), ElemSize: 4 Byte(s),
TotalSize: 8112000 Byte(s)

- weights, Shared Input, Shape: (1250, 1200), ElemSize: 4 Byte(s),
TotalSize: 6000000 Byte(s)

- weights, Shared Input, Shape: (240, 1200), ElemSize: 4 Byte(s),
TotalSize: 1152000 Byte(s)

- GpuElemwise{Composite{tanh((i0 + i1))}}[(0, 0)].0, Shape: (100, 1, 28,
28), ElemSize: 4 Byte(s), TotalSize: 313600 Byte(s)

- <CudaNdarrayType(float32, vector)>, Shared Input, Shape: (50000,),
ElemSize: 4 Byte(s), TotalSize: 200000 Byte(s)

- weights, Shared Input, Shape: (10, 1200), ElemSize: 4 Byte(s),
TotalSize: 48000 Byte(s)

- filterbank, Shared Input, Shape: (50, 20, 3, 3), ElemSize: 4 Byte(s),
TotalSize: 36000 Byte(s)

- GpuContiguous.0, Shape: (50, 20, 3, 3), ElemSize: 4 Byte(s), TotalSize:
36000 Byte(s)

- bias, Shared Input, Shape: (1690,), ElemSize: 4 Byte(s), TotalSize: 6760
Byte(s)

- bias, Shared Input, Shape: (1200,), ElemSize: 4 Byte(s), TotalSize: 4800
Byte(s)

- bias, Shared Input, Shape: (1200,), ElemSize: 4 Byte(s), TotalSize: 4800
Byte(s)

- bias, Shared Input, Shape: (1200,), ElemSize: 4 Byte(s), TotalSize: 4800
Byte(s)

- GPU_mrg_uniform{CudaNdarrayType(float32, vector),inplace}.0, Shape:
(996,), ElemSize: 4 Byte(s), TotalSize: 3984 Byte(s)

- <CudaNdarrayType(float32, vector)>, Shared Input, Shape: (996,),
ElemSize: 4 Byte(s), TotalSize: 3984 Byte(s)

- filterbank, Shared Input, Shape: (20, 1, 5, 5), ElemSize: 4 Byte(s),
TotalSize: 2000 Byte(s)

- weights, Shared Input, Shape: (240, 1), ElemSize: 4 Byte(s), TotalSize:
960 Byte(s)

- filterbank, Shared Input, Shape: (10, 1, 3, 3), ElemSize: 4 Byte(s),
TotalSize: 360 Byte(s)

- bias, Shared Input, Shape: (50,), ElemSize: 4 Byte(s), TotalSize: 200
Byte(s)

- GpuDimShuffle{x,0,x,x}.0, Shape: (1, 50, 1, 1), ElemSize: 4 Byte(s),
TotalSize: 200 Byte(s)

- bias, Shared Input, Shape: (20,), ElemSize: 4 Byte(s), TotalSize: 80
Byte(s)

- GpuDimShuffle{x,0,x,x}.0, Shape: (1, 20, 1, 1), ElemSize: 4 Byte(s),
TotalSize: 80 Byte(s)

- MakeVector{dtype='int64'}.0, Shape: (6,), ElemSize: 8 Byte(s),
TotalSize: 48 Byte(s)

- Join.0, Shape: (4,), ElemSize: 8 Byte(s), TotalSize: 32 Byte(s)

- TensorConstant{[100 20 12 12]}, Shape: (4,), ElemSize: 8 Byte(s),
TotalSize: 32 Byte(s)

- TensorConstant{[100 1 28 28]}, Shape: (4,), ElemSize: 8 Byte(s),
TotalSize: 32 Byte(s)

- TensorConstant{[100 10 13 13]}, Shape: (4,), ElemSize: 8 Byte(s),
TotalSize: 32 Byte(s)

- TensorConstant{[100 28 28]}, Shape: (3,), ElemSize: 8 Byte(s),
TotalSize: 24 Byte(s)

- TensorConstant{(2,) of 0}, Shape: (2,), ElemSize: 8 Byte(s), TotalSize:
16 Byte(s)

- TensorConstant{[ 100 1250]}, Shape: (2,), ElemSize: 8 Byte(s),
TotalSize: 16 Byte(s)

- MakeVector{dtype='int64'}.0, Shape: (2,), ElemSize: 8 Byte(s),
TotalSize: 16 Byte(s)

- TensorConstant{[100 10]}, Shape: (2,), ElemSize: 8 Byte(s), TotalSize:
16 Byte(s)

- MakeVector{dtype='int64'}.0, Shape: (2,), ElemSize: 8 Byte(s),
TotalSize: 16 Byte(s)

- TensorConstant{(2,) of 2}, Shape: (2,), ElemSize: 8 Byte(s), TotalSize:
16 Byte(s)

- MakeVector{dtype='int64'}.0, Shape: (2,), ElemSize: 8 Byte(s),
TotalSize: 16 Byte(s)

- TensorConstant{[100 -1]}, Shape: (2,), ElemSize: 8 Byte(s), TotalSize:
16 Byte(s)

- Constant{3}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- Shape_i{1}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- Constant{2}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- Shape_i{1}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- TensorConstant{-1}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
Byte(s)

- Subtensor{int64}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
Byte(s)

- Constant{0}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- Assert{msg='The convolution would produce an invalid shape (dim[1] <
0).'}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- Elemwise{mul,no_inplace}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize:
8.0 Byte(s)

- index, Input, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- Constant{4}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- TensorConstant{28}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
Byte(s)

- Assert{msg='The convolution would produce an invalid shape (dim[2] <=
0).'}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- TensorConstant{12}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
Byte(s)

- Shape_i{0}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- Assert{msg='The convolution would produce an invalid shape (dim[3] <=
0).'}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- TensorConstant{28}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
Byte(s)

- Assert{msg='The convolution would produce an invalid shape (dim[1] <
0).'}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- TensorConstant{(1,) of 1000}, Shape: (1,), ElemSize: 8 Byte(s),
TotalSize: 8 Byte(s)

- Assert{msg='The convolution would produce an invalid shape (dim[2] <=
0).'}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- Assert{msg='The convolution would produce an invalid shape (dim[3] <=
0).'}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- Constant{1}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- TensorConstant{(1,) of 100}, Shape: (1,), ElemSize: 8 Byte(s),
TotalSize: 8 Byte(s)

- TensorConstant{5}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- TensorConstant{100}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
Byte(s)

- Constant{500}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- TensorConstant{28}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
Byte(s)

- TensorConstant{28}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
Byte(s)

- Elemwise{Composite{(i0 + (((i1 + Composite{Switch(LT(i0, i1), i1,
i0)}(i2, i3)) - Switch(LT(Composite{Switch(LT(i0, i1), i1,
i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(i4, i2), i3),
Composite{Switch(LT(i0, i1), i1, i0)}(i2, i3)), Composite{Switch(LT(i0,
i1), i1, i0)}(Composite{Switch(GE(i0, i1), i1, i0)}(i4, i2), i3),
Composite{Switch(LT(i0, i1), i1, i0)}(i2, i3))) // i5))}}[(0, 2)].0, Shape:
(), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- TensorConstant{1}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- Subtensor{int64}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0
Byte(s)

- TensorConstant{0}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- Constant{5}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)

- GpuSubtensor{int64}.0, Shape: (), ElemSize: 4 Byte(s), TotalSize: 4.0
Byte(s)

- GpuCAReduce{add}{1,1}.0, Shape: (), ElemSize: 4 Byte(s), TotalSize: 4.0
Byte(s)

- Constant{1.0}, Shape: (), ElemSize: 4 Byte(s), TotalSize: 4.0 Byte(s)

- CudaNdarrayConstant{[-2.]}, Shape: (1,), ElemSize: 4 Byte(s), TotalSize:
4 Byte(s)

- GpuSubtensor{int64}.0, Shape: (), ElemSize: 4 Byte(s), TotalSize: 4.0
Byte(s)

- CudaNdarrayConstant{-0.5}, Shape: (), ElemSize: 4 Byte(s), TotalSize:
4.0 Byte(s)

- bias, Shared Input, Shape: (1,), ElemSize: 4 Byte(s), TotalSize: 4
Byte(s)

- CudaNdarrayConstant{[[ 0.5]]}, Shape: (1, 1), ElemSize: 4 Byte(s),
TotalSize: 4 Byte(s)

- CudaNdarrayConstant{0.5}, Shape: (), ElemSize: 4 Byte(s), TotalSize: 4.0
Byte(s)

- bias, Shared Input, Shape: (1,), ElemSize: 4 Byte(s), TotalSize: 4
Byte(s)

- CudaNdarrayConstant{[ 6.28318548]}, Shape: (1,), ElemSize: 4 Byte(s),
TotalSize: 4 Byte(s)

- CudaNdarrayConstant{[[[[ 0.5]]]]}, Shape: (1, 1, 1, 1), ElemSize: 4
Byte(s), TotalSize: 4 Byte(s)

- Constant{0.0}, Shape: (), ElemSize: 4 Byte(s), TotalSize: 4.0 Byte(s)

- TensorConstant{10}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0
Byte(s)

- TensorConstant{20}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0
Byte(s)

- TensorConstant{0}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0 Byte(s)

- TensorConstant{5}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0 Byte(s)

- TensorConstant{3}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0 Byte(s)

- TensorConstant{1}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0 Byte(s)

- Elemwise{eq,no_inplace}.0, Shape: (), ElemSize: 1 Byte(s), TotalSize:
1.0 Byte(s)

- TensorConstant{50}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0
Byte(s)

- Elemwise{eq,no_inplace}.0, Shape: (), ElemSize: 1 Byte(s), TotalSize:
1.0 Byte(s)

- Elemwise{eq,no_inplace}.0, Shape: (), ElemSize: 1 Byte(s), TotalSize:
1.0 Byte(s)

- Elemwise{eq,no_inplace}.0, Shape: (), ElemSize: 1 Byte(s), TotalSize:
1.0 Byte(s)

- TensorConstant{False}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0
Byte(s)

TotalSize: 172726976.0 Byte(s) 0.161 GB

TotalSize inputs: 172377152.0 Byte(s) 0.161 GB


HINT: Re-running with most Theano optimization disabled could give you a
back-trace of when this node was created. This can be done by setting
the Theano flag 'optimizer=fast_compile'. If that does not work, Theano
optimizations can be disabled with 'optimizer=None'.
Frédéric Bastien
2017-03-08 13:59:06 UTC
There is a run-time assert in the graph that fails. To find out where it was
created, try the following Theano flag; it will probably add to the error
message the stack trace showing where the assert was created:

optimizer=fast_compile

If not, try optimizer=None.
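
To see what such a run-time assert looks like, here is a minimal, self-contained sketch (made-up shapes and names, not taken from your code) of the op behind the message 'Theano Assert failed!'. In your graph the assert seems to have been inserted during optimization (the storage map also lists asserts like 'The convolution would produce an invalid shape (dim[1] < 0).'):

import numpy as np
import theano
import theano.tensor as T
from theano.tensor.opt import Assert  # op whose default message is 'Theano Assert failed!'

x = T.matrix('x')
# Assert returns its first input unchanged, but evaluates the remaining
# scalar conditions at run time and raises AssertionError if any is False.
assert_op = Assert()
y = assert_op(T.tanh(x), T.eq(x.shape[1], 28))

f = theano.function([x], y)
f(np.zeros((2, 28), dtype=theano.config.floatX))  # condition holds, returns tanh(x)
f(np.zeros((2, 27), dtype=theano.config.floatX))  # AssertionError: Theano Assert failed!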

Fred
Ragav Venkatesan
2017-03-08 21:26:32 UTC
This is all the error I get with optimizer=fast_compile and
exception_verbosity=high.
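
For reference, a sketch of how I understand these flags are normally set (the script name below is just a placeholder):

# From the shell, for a single run:
#   THEANO_FLAGS='optimizer=fast_compile,exception_verbosity=high' python train.py
# or, to disable graph optimizations entirely:
#   THEANO_FLAGS='optimizer=None,exception_verbosity=high' python train.py
#
# Or in ~/.theanorc:
#   [global]
#   optimizer = fast_compile
#   exception_verbosity = high
#
# Or programmatically, before any theano.function is compiled:
import theano
theano.config.optimizer = 'fast_compile'    # 'None' disables graph optimizations
theano.config.exception_verbosity = 'high'  # adds more detail to Apply-node errors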
Frédéric Bastien
2017-03-10 13:34:03 UTC
Try with optimizer=None as I wrote.
Sebastian B
2017-06-13 09:05:21 UTC
Not sure if this is related, but I got a (similar?) error:

"The convolution would produce an invalid shape (dim[3] <= 0)"

Apparently some of my input data was smaller than the filter_size of the
convolution layer. In my case, padding these few unusually small inputs
helped.
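
For a 'valid' convolution the output size along a spatial dimension is (input - filter + 1), so it drops to zero or below as soon as the input is smaller than the filter. A minimal sketch of that kind of fix (zero-padding, with made-up shapes):

import numpy as np

def pad_to_min_size(batch, min_h, min_w):
    # batch has shape (n, channels, height, width); zero-pad height/width
    # up to the filter size so the conv output shape stays positive.
    n, c, h, w = batch.shape
    pad_h = max(min_h - h, 0)
    pad_w = max(min_w - w, 0)
    return np.pad(batch,
                  ((0, 0), (0, 0),
                   (pad_h // 2, pad_h - pad_h // 2),
                   (pad_w // 2, pad_w - pad_w // 2)),
                  mode='constant')

small = np.zeros((4, 1, 3, 9), dtype='float32')  # height 3 is smaller than a 5x5 filter
print(pad_to_min_size(small, 5, 5).shape)        # (4, 1, 5, 9)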