ephi5757 via theano-users
2017-08-16 21:49:37 UTC
I'm retraining my implementation of the AlexNet neural network model in
Theano, and not long after initialization the program crashes with
"ValueError: dimension mismatch in x,y_idx arguments" (see the traceback
below). Any comments or suggestions would be appreciated. The only
discernible difference from the previous training run is that I am using
5003 .hkl training image data files instead of 5004, though I don't see
why that count should matter.
Looking forward to your reply.
Arnold
_______________________________________________________________.
C:\SciSoft\Git\theano_alexnet>python train.py THEANO_FLAGS=mode=FAST_RUN,floatX=float32
Using gpu device 0: Quadro K4000M (CNMeM is disabled, CuDNN 3007)
Using gpu device 0: Quadro K4000M (CNMeM is disabled, CuDNN 3007)
... building the model
conv (cudnn) layer with shape_in: (3, 227, 227, 1)
conv (cudnn) layer with shape_in: (96, 27, 27, 1)
conv (cudnn) layer with shape_in: (256, 13, 13, 1)
conv (cudnn) layer with shape_in: (384, 13, 13, 1)
conv (cudnn) layer with shape_in: (384, 13, 13, 1)
fc layer with num_in: 9216 num_out: 4096
dropout layer with P_drop: 0.5
fc layer with num_in: 4096 num_out: 4096
dropout layer with P_drop: 0.5
softmax layer with num_in: 4096 num_out: 1000
... training
______________________________________________________________________________.
Traceback (most recent call last):
  File "C:\SciSoft\WinPython-64bit-2.7.9.4\python-2.7.9.amd64\lib\multiprocessing\process.py", line 266, in _bootstrap
    self.run()
  File "C:\SciSoft\WinPython-64bit-2.7.9.4\python-2.7.9.amd64\lib\multiprocessing\process.py", line 120, in run
    self._target(*self._args, **self._kwargs)
  File "C:\SciSoft\Git\theano_alexnet\train.py", line 128, in train_net
    recv_queue=load_recv_queue)
  File "C:\SciSoft\Git\theano_alexnet\train_funcs.py", line 171, in train_model_wrap
    cost_ij = train_model()
  File "c:\scisoft\git\theano\theano\compile\function_module.py", line 871, in __call__
    storage_map=getattr(self.fn, 'storage_map', None))
  File "c:\scisoft\git\theano\theano\gof\link.py", line 314, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "c:\scisoft\git\theano\theano\compile\function_module.py", line 859, in __call__
    outputs = self.fn()
ValueError: dimension mismatch in x,y_idx arguments
Apply node that caused the error: GpuCrossentropySoftmaxArgmax1HotWithBias(GpuDot22.0, <CudaNdarrayType(float32, vector)>, GpuFromHost.0)
Toposort index: 298
Inputs types: [CudaNdarrayType(float32, matrix), CudaNdarrayType(float32, vector), CudaNdarrayType(float32, vector)]
Inputs shapes: [(256, 1000), (1000,), (1,)]
Inputs strides: [(1000, 1), (1,), (0,)]
Inputs values: ['not shown', 'not shown', CudaNdarray([ 275.])]
Outputs clients: [[GpuCAReduce{add}{1}(GpuCrossentropySoftmaxArgmax1HotWithBias.0)], [GpuCrossentropySoftmax1HotWithBiasDx(GpuElemwise{Inv}[(0, 0)].0, GpuCrossentropySoftmaxArgmax1HotWithBias.1, GpuFromHost.0)], []]
.
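To illustrate what the shapes in the traceback imply, here is a hypothetical NumPy sketch (my own illustration, not the actual Theano op or theano_alexnet code): the softmax input x is (256, 1000) as expected for a 256-image minibatch, but the label vector y_idx arrived with only a single entry, so the cross-entropy op cannot pair one label per row. The divisibility checks at the end are just a guess about why 5003 vs. 5004 files might matter.

```python
import numpy as np

# Hypothetical sketch of the shapes Theano reports: the softmax input x
# is (batch_size, n_classes) = (256, 1000), but the label vector y_idx
# arrived with a single entry, matching CudaNdarray([ 275.]).
batch_size, n_classes = 256, 1000
x = np.zeros((batch_size, n_classes), dtype=np.float32)
y_idx = np.array([275], dtype=np.int64)

# The cross-entropy op needs exactly one label per row of x, so these
# shapes cannot line up:
print(x.shape[0], y_idx.shape[0])  # 256 vs 1

# One unverified guess: 5003 files leave a remainder if the loader
# groups files, while 5004 divides evenly:
print(5003 % 2, 5004 % 2)
print(5003 % 4, 5004 % 4)
```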
_____________________________________________________________________.
--
---
You received this message because you are subscribed to the Google Groups "theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to theano-users+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.