
ExponentialCyclicalLearningRate - TypeError: Cannot convert 1.0 to EagerTensor of dtype int64 #2799

ImSo3K commented Jan 15, 2023

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
  • TensorFlow version and how it was installed (source or binary): 2.10.0 binary (with pip)
  • TensorFlow-Addons version and how it was installed (source or binary): 0.19.0 binary (with pip)
  • Python version: 3.9.7
  • Is GPU used? (yes/no): no

Describe the bug

When I use ExponentialCyclicalLearningRate and fit my model with a TensorBoard callback, I get the following error:
TypeError: Cannot convert 1.0 to EagerTensor of dtype int64

After a bit of debugging, I found that the issue is here:

    def __call__(self, step):
        with tf.name_scope(self.name or "CyclicalLearningRate"):
            initial_learning_rate = tf.convert_to_tensor(
                self.initial_learning_rate, name="initial_learning_rate"
            )
            dtype = initial_learning_rate.dtype
            maximal_learning_rate = tf.cast(self.maximal_learning_rate, dtype)
            step_size = tf.cast(self.step_size, dtype)
            step_as_dtype = tf.cast(step, dtype)
            cycle = tf.floor(1 + step_as_dtype / (2 * step_size))
            x = tf.abs(step_as_dtype / step_size - 2 * cycle + 1)
            mode_step = cycle if self.scale_mode == "cycle" else step
            return initial_learning_rate + (
                maximal_learning_rate - initial_learning_rate
            ) * tf.maximum(tf.cast(0, dtype), (1 - x)) * self.scale_fn(mode_step)

Specifically at:

            return initial_learning_rate + (
                maximal_learning_rate - initial_learning_rate
            ) * tf.maximum(tf.cast(0, dtype), (1 - x)) * self.scale_fn(mode_step)

It seems that self.scale_fn(mode_step) fails internally when trying to compute self.gamma ** x, because x (mode_step) is of type int64.
I saw a similar issue in #2593 with a fix that was supposedly about to be merged, but since I'm using the latest version, I guess the fix was never actually merged.
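
For reference, the failure reproduces in isolation. This is just my own sketch of what I believe scale_fn ends up evaluating: gamma is a plain Python float, while mode_step is the optimizer's int64 iteration counter.

    import tensorflow as tf

    gamma = 1.0                                  # plain Python float, as stored on the schedule
    mode_step = tf.constant(3, dtype=tf.int64)   # step passed in by the optimizer/TensorBoard

    # float.__pow__ returns NotImplemented, so TF falls back to Tensor.__rpow__
    # and tries to convert 1.0 to an int64 tensor, raising:
    # TypeError: Cannot convert 1.0 to EagerTensor of dtype int64
    gamma ** mode_step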

Code to reproduce the issue

Same as #2593
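
For completeness, something along these lines should also trigger it (my own minimal sketch, not the exact snippet from #2593). As far as I can tell, the TensorBoard callback evaluates the schedule at optimizer.iterations, which is an int64 variable.

    import tensorflow as tf
    import tensorflow_addons as tfa

    lr_schedule = tfa.optimizers.ExponentialCyclicalLearningRate(
        initial_learning_rate=1e-4,
        maximal_learning_rate=1e-2,
        step_size=2000,
        gamma=0.96,
    )
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule), loss="mse")

    # At epoch end the TensorBoard callback calls the schedule with
    # optimizer.iterations (int64), which raises the TypeError above.
    model.fit(
        tf.random.normal((32, 4)),
        tf.random.normal((32, 1)),
        epochs=1,
        callbacks=[tf.keras.callbacks.TensorBoard(log_dir="./logs")],
    )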

Potential Fix

Change self.scale_fn(mode_step) to self.scale_fn(step_as_dtype). Since step_as_dtype is of type float32, this makes that specific line work; I just don't know whether it could break anything that depends on the current behaviour.
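
In the meantime, a possible stopgap on the user side (just a sketch, I haven't verified it end to end) is to build the same schedule from the base CyclicalLearningRate with a scale_fn that does the cast itself:

    import tensorflow as tf
    import tensorflow_addons as tfa

    gamma = 0.96

    # Should behave like ExponentialCyclicalLearningRate, except that the
    # scale_fn casts the step to float32 before exponentiation, so an int64
    # step no longer breaks it.
    lr_schedule = tfa.optimizers.CyclicalLearningRate(
        initial_learning_rate=1e-4,
        maximal_learning_rate=1e-2,
        step_size=2000,
        scale_fn=lambda x: gamma ** tf.cast(x, tf.float32),
        scale_mode="iterations",
    )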
