Alexander Botev
2017-07-22 00:43:11 UTC
So the scan checkpointing seems very interesting from the perspective that
it can be used for things like learning-to-learn.
However, my question is: can we tell Theano which parts of each N-th
iteration to store and which not? For instance, in the learning-to-learn
framework where we unroll SGD,
the optimal choice would be to store only the "updated" parameters that get
passed to the next time step, rather than the whole computation. Is it
possible to achieve something like that?
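To make the memory profile I have in mind concrete, here is a plain-Python sketch (not Theano code; the function names are my own and the loss is a toy quadratic) of an unrolled SGD loop where only the per-step "updated" parameters are kept, while everything else inside each step could in principle be recomputed:

```python
# Illustrative sketch: unrolled SGD that stores only the per-step
# parameters, not the intermediate per-step computation.
def loss_grad(w, x, y):
    # Gradient of the toy loss 0.5 * (w*x - y)^2 with respect to w.
    return (w * x - y) * x

def unroll_sgd(w0, data, lr=0.1):
    ws = [w0]  # only the updated parameters are retained per step
    w = w0
    for x, y in data:
        # Everything computed inside this step (here just the gradient)
        # is discarded; only the new parameter value is stored.
        w = w - lr * loss_grad(w, x, y)
        ws.append(w)
    return ws

data = [(1.0, 2.0), (2.0, 2.0)]
ws = unroll_sgd(1.0, data)
# step 1: grad = (1*1 - 2)*1 = -1.0  -> w = 1.0 + 0.1  = 1.1
# step 2: grad = (1.1*2 - 2)*2 = 0.4 -> w = 1.1 - 0.04 = 1.06
```

The question is whether scan's checkpointing can be told to retain only the analogue of `ws` here, rather than the full per-iteration graph.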
--
---
You received this message because you are subscribed to the Google Groups "theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to theano-users+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.