
Tensorflow: Shared Variables Error With Simple LSTM Network

I am trying to build the simplest possible LSTM network. I just want it to predict the next value in the sequence np_input_data.

Solution 1:

The call to lstm here:

for i in range(num_steps-1):
  output, state = lstm(tf_inputs[i], state)

will try to create variables with the same name on each iteration unless you tell it otherwise. You can avoid this using tf.variable_scope:

with tf.variable_scope("myrnn") as scope:
  for i in range(num_steps-1):
    if i > 0:
      scope.reuse_variables()
    output, state = lstm(tf_inputs[i], state)     

The first iteration creates the variables that represent your LSTM parameters, and every subsequent iteration (after the call to reuse_variables) simply looks them up in the scope by name.
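Putting it together, here is a minimal self-contained sketch. The sizes (batch_size=4, num_steps=10, lstm_size=32) are made up for illustration, and depending on your TF 1.x version the cell class may live in tf.contrib.rnn rather than tf.nn.rnn_cell:

import tensorflow as tf

batch_size, num_steps, lstm_size = 4, 10, 32

# One placeholder per time step, matching the manual-unrolling style above.
tf_inputs = [tf.placeholder(tf.float32, [batch_size, 1])
             for _ in range(num_steps)]

lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)
state = lstm.zero_state(batch_size, tf.float32)

outputs = []
with tf.variable_scope("myrnn") as scope:
  for i in range(num_steps - 1):
    if i > 0:
      # Reuse the weights created on the first iteration instead of
      # trying (and failing) to create new ones under the same name.
      scope.reuse_variables()
    output, state = lstm(tf_inputs[i], state)
    outputs.append(output)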


Solution 2:

I ran into a similar issue in TensorFlow v1.0.1 using tf.nn.dynamic_rnn. It turned out that the error only arose when I retrained, or cancelled in the middle of training and restarted the training process. Basically, the old graph was not being reset.

Long story short, throw a tf.reset_default_graph() at the start of your code and it should help, at least when using tf.nn.dynamic_rnn and retraining.
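For example (a minimal sketch; the model details are just for illustration):

import tensorflow as tf

# Drop any graph left over from a previous, possibly interrupted, run
# (e.g. in a notebook) before rebuilding the model.
tf.reset_default_graph()

inputs = tf.placeholder(tf.float32, [None, 10, 1])  # [batch, time, features]
cell = tf.nn.rnn_cell.BasicLSTMCell(32)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)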


Solution 3:

Use tf.nn.rnn or tf.nn.dynamic_rnn, which do this (and a lot of other nice things) for you.
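For instance, a minimal sketch with tf.nn.dynamic_rnn, which unrolls the cell over the time dimension and handles variable creation and reuse internally, so no manual variable_scope is needed (shapes and sizes here are made up; note that in TF 1.2+ tf.nn.rnn was renamed tf.nn.static_rnn):

import tensorflow as tf

# Batch of variable size, 10 time steps, 1 feature per step.
inputs = tf.placeholder(tf.float32, [None, 10, 1])
cell = tf.nn.rnn_cell.BasicLSTMCell(32)

# outputs has shape [batch, time, lstm_size]; final_state is the
# cell state after the last step.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)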
