
No gradients provided for any variable: %s.

Package: tensorflow
Exception Class: ValueError

Raise code

"Gradient must be a Tensor, IndexedSlices, or None: %s" % g)
      p = _get_processor(v)
      converted_grads_and_vars.append((g, v, p))

    converted_grads_and_vars = tuple(converted_grads_and_vars)
    var_list = [v for g, v, _ in converted_grads_and_vars if g is not None]
    if not var_list:
      raise ValueError("No gradients provided for any variable: %s." %
                       ([str(v) for _, v, _ in converted_grads_and_vars],))
    with ops.init_scope():
      self._create_slots(var_list)
    update_ops = []
    with ops.name_scope(name, self._name, skip_on_eager=False) as name:
      self._prepare()
      for grad, var, processor in converted_grads_and_vars:
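This check lives in the TF1 Optimizer.apply_gradients path, which minimize calls after compute_gradients. Below is a minimal sketch (not this page's example, just an assumed tf.compat.v1 setup) that reaches the raise directly by passing only None gradients:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

v = tf.Variable(tf.zeros([3]))
opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)

try:
    # Every gradient is None, so var_list in the code above ends up empty
    # and the ValueError is raised while the graph is being built.
    opt.apply_gradients([(None, v)])
except ValueError as e:
    print(e)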

Ways to fix


This happens when a badly constructed loss is passed to an optimizer's minimize method: the value given does not depend on any trainable variable, so no gradients can be computed and this exception is raised.

Here is a sample code that reproduces the exception.

Notice how the cross-entropy loss is calculated: the labels and logits arguments are interchanged, so the loss no longer depends (differentiably) on the trainable variables W and b.

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

X = tf.placeholder('float', [None, 20])
Y = tf.placeholder('float', [None, 10])
W = tf.Variable(tf.zeros([20, 10]))
b = tf.Variable(tf.zeros([10]))

activation = tf.add(tf.matmul(X, W), b)
Y = tf.placeholder('float', [None, 10])  # redundant: Y is already defined above
# BUG: labels and logits are swapped. The gradient is stopped on the labels
# argument, so this loss does not depend on the variables W and b.
loss = tf.nn.softmax_cross_entropy_with_logits(labels=activation, logits=Y)
print(loss)
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)

The exception output:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-3-535b7e06c207> in <module>()
     11 loss = tf.nn.softmax_cross_entropy_with_logits(labels=activation, logits=Y)
     12 print(loss)
---> 13 optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/optimizer.py in minimize(self, loss, global_step, var_list, gate_gradients, aggregation_method, colocate_gradients_with_ops, name, grad_loss)
    486           "No gradients provided for any variable, check your graph for ops"
    487           " that do not support gradients, between variables %s and loss %s." %
--> 488           ([str(v) for _, v in grads_and_vars], loss))
    489
    490     return self.apply_gradients(grads_and_vars, global_step=global_step,

ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ["<tf.Variable 'Variable:0' shape=(20, 10) dtype=float32_ref>", "<tf.Variable 'Variable_1:0' shape=(10,) dtype=float32_ref>", "<tf.Variable 'Variable_2:0' shape=(20, 10) dtype=float32_ref>", "<tf.Variable 'Variable_3:0' shape=(10,) dtype=float32_ref>", "<tf.Variable 'Variable_4:0' shape=(20, 10) dtype=float32_ref>", "<tf.Variable 'Variable_5:0' shape=(10,) dtype=float32_ref>"] and loss Tensor("softmax_cross_entropy_with_logits_sg_2/Reshape_2:0", shape=(?,), dtype=float32).

How to fix it:

Make sure the tensor passed to the minimize method supports gradients, i.e. it actually depends on the trainable variables.

In this case the arguments should be:

labels: Each row labels[i] must be a valid probability distribution (for example one-hot rows, as shown in the short check below).

logits: Unscaled log probabilities.
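To make the labels requirement concrete, here is a small illustrative check (the class ids below are made up) showing that one-hot rows are valid probability distributions:

import numpy as np

class_ids = np.array([3, 0, 7, 7])               # hypothetical target classes
labels = np.eye(10, dtype='float32')[class_ids]  # one-hot rows, shape (4, 10)
print(labels.sum(axis=1))                        # each row sums to 1.0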

Here is a working version of the above code:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

X = tf.placeholder('float', [None, 20])
Y = tf.placeholder('float', [None, 10])
W = tf.Variable(tf.zeros([20, 10]))
b = tf.Variable(tf.zeros([10]))

activation = tf.add(tf.matmul(X, W), b)
# Correct order: the placeholder Y carries the labels and the model output
# `activation` is passed as the logits.
loss = tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=activation)
print(loss)
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
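For completeness, here is a sketch of running one training step with the fixed graph. It continues from the block above; the batch data and shapes are invented for illustration:

import numpy as np

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Hypothetical batch: 32 samples, 20 features, 10 one-hot classes.
    x_batch = np.random.rand(32, 20).astype('float32')
    y_batch = np.eye(10, dtype='float32')[np.random.randint(0, 10, size=32)]
    _, batch_loss = sess.run([optimizer, loss],
                             feed_dict={X: x_batch, Y: y_batch})
    print(batch_loss.mean())  # loss is per-example; averaged here for display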
Answered by kellemnegasi, Feb 24, 2022
