
Trying to create optimizer slot variable under the scope for tf.distribute.Strategy ((param1)), which is different from the scope used for the original variable ((param2)). Make sure the slot variables are created under the same strategy scope. This may happen if you're restoring from a checkpoint outside the scope

Package: tensorflow
Exception Class:
ValueError

Raise code

        initial_value = functools.partial(
            initializer, shape=slot_shape, dtype=var.dtype)
      else:
        initial_value = initializer

      with self._distribution_strategy_scope():
        strategy = distribute_ctx.get_strategy()
        if not strategy.extended.variable_created_in_scope(var):
          raise ValueError(
              "Trying to create optimizer slot variable under the scope for "
              "tf.distribute.Strategy ({}), which is different from the scope "
              "used for the original variable ({}). Make sure the slot "
              "variables are created under the same strategy scope. This may "
              "happen if you're restoring from a checkpoint outside the scope"
              .format(strategy, var))

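The check that fires here is strategy.extended.variable_created_in_scope(var). A minimal sketch of how it behaves (not part of the raise code above; assumes a MirroredStrategy):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
  v_in = tf.Variable(1.0)   # created inside the strategy's scope
v_out = tf.Variable(1.0)    # created outside any scope

print(strategy.extended.variable_created_in_scope(v_in))   # True
print(strategy.extended.variable_created_in_scope(v_out))  # False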

Ways to fix


Error code:

import tensorflow as tf
from tensorflow.python.keras.optimizer_v2 import optimizer_v2
from tensorflow.python.ops import variables
from tensorflow.python.framework import dtypes

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
  x = variables.Variable([[1.0]], dtype=dtypes.float32)  # <-- created inside the strategy scope
slot_shape = [2]
optimizer_1 = optimizer_v2.OptimizerV2(name='test')
optimizer_1.add_slot(x, 'test_slot', 'ones', shape=slot_shape)  # <-- called outside the scope, under the default strategy
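Running this snippet raises the ValueError shown at the top of the page, because add_slot is called outside the scope in which x was created.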

Explanation:

When a slot is added to the optimizer, add_slot looks up the strategy that is current at the call site:

strategy = distribute_ctx.get_strategy()

If no tf.distribute scope is active at that point, this returns the default strategy. In the error code above, however, the variable was created under a different strategy; we used MirroredStrategy.

So the variable and the optimizer end up bound to different strategies: the variable belongs to MirroredStrategy, while the optimizer picked up the default tf.distribute.Strategy, and the variable_created_in_scope check fails.

To fix this, create the variable and the slot under the same strategy. Whether that is inside or outside the scope doesn't matter, as long as both sides match.
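A quick way to see the mismatch (a minimal sketch, not from the original answer) is to print what tf.distribute.get_strategy() returns inside and outside the scope:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

print(type(tf.distribute.get_strategy()).__name__)  # a default strategy object (no scope active)
with strategy.scope():
  print(type(tf.distribute.get_strategy()).__name__)  # MirroredStrategy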

Fix code:

import tensorflow as tf
from tensorflow.python.keras.optimizer_v2 import optimizer_v2
from tensorflow.python.ops import variables
from tensorflow.python.framework import dtypes

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
  x = variables.Variable([[1.0]], dtype=dtypes.float32)
  slot_shape = [2]
  optimizer_1 = optimizer_v2.OptimizerV2(name='test')
  optimizer_1.add_slot(x, 'test_slot', 'ones', shape=slot_shape)
  # Both the variable and the slot are now created inside the same strategy scope
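As noted above, keeping both outside the scope also avoids the error, since the variable and the slot then share the default strategy. A minimal sketch of that variant (same private-module imports as above):

from tensorflow.python.keras.optimizer_v2 import optimizer_v2
from tensorflow.python.ops import variables
from tensorflow.python.framework import dtypes

# No strategy scope at all: both the variable and the slot variable
# are created under the same default strategy.
x = variables.Variable([[1.0]], dtype=dtypes.float32)
optimizer_1 = optimizer_v2.OptimizerV2(name='test')
optimizer_1.add_slot(x, 'test_slot', 'ones', shape=[2])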
answered Jul 09, 2021 by anonim (13.0k)
