fix SyntaxWarning #954
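For context, this is the warning the PR addresses: since Python 3.8, CPython emits a SyntaxWarning when `is` / `is not` is used to compare against a literal, because `is` tests object identity rather than value. A minimal reproduction (assuming Python >= 3.8; the warning fires at compile time, so `compile()` is enough to trigger it):

```python
import warnings

# Compiling code that uses `is` with a literal triggers:
#   SyntaxWarning: "is" with a literal. Did you mean "=="?
source = "x = 0\nflag = x is 0\n"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    compile(source, "<demo>", "exec")

syntax_warnings = [w for w in caught if issubclass(w.category, SyntaxWarning)]
print([str(w.message) for w in syntax_warnings])
```

Note that the warning is attached to the source text, not to runtime values, which is why a blanket `is` -> `==` rewrite like this PR silences it regardless of what the operands are at runtime.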

Status: Open. Wants to merge 1 commit into base: master.
2 changes: 1 addition & 1 deletion returnn/tf/layers/base.py
@@ -1380,7 +1380,7 @@ def get_constraints_value(self):
       c += self.spatial_smoothing * self.get_output_spatial_smoothing_energy()
     if self.darc1:
       c += self.darc1 * self.get_darc1()
-    if c is 0:
+    if c == 0:
Review comment (Member):
This is wrong in case c is a tf.Tensor or numpy.ndarray.

Same also for all the other changes.

       return None
     return c
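The reviewer's objection can be demonstrated with NumPy alone (a sketch; a `tf.Tensor` behaves analogously, where `==` builds an elementwise comparison op rather than returning a Python bool):

```python
import numpy as np

c = np.zeros(3)

# `c == 0` is elementwise: it yields an ndarray of booleans, not a bool.
elementwise = (c == 0)
print(elementwise)  # an array of three True values

# Using that array as an `if` condition raises a ValueError
# ("The truth value of an array with more than one element is ambiguous").
try:
    if c == 0:
        pass
    branch_error = None
except ValueError as exc:
    branch_error = exc

# By contrast, identity against the int 0 is simply False for an ndarray,
# which is the behavior the original `is 0` (accidentally) relied on.
zero = 0
assert (c is zero) is False
```

So for these particular call sites, replacing `is 0` with `== 0` changes behavior whenever the value can be a tensor or array, which is the reviewer's point.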
2 changes: 1 addition & 1 deletion returnn/tf/network.py
@@ -1618,7 +1618,7 @@ def inv_reduce_sum(x, name):
     if should_train or should_eval:
       # These values are cached internally and the graph nodes are created on the first call.
       loss = self.get_objective()
-      if loss is 0:
+      if loss == 0:
         loss = tf_util.global_tensor(lambda: tf.constant(0.0), name="zero_loss")
       else:  # non-constant-zero loss
         assert self.losses_dict
4 changes: 2 additions & 2 deletions returnn/tf/util/basic.py
@@ -2021,7 +2021,7 @@ def expand_dims_unbroadcast(x, axis, dim, name="expand_dims_unbroadcast"):
   with tf.name_scope(name):
     x = tf.convert_to_tensor(x)
     x = tf.expand_dims(x, axis)
-    if dim is not 1:
+    if dim != 1:
       new_ndim = x.get_shape().ndims
       assert new_ndim is not None, "not implemented otherwise yet"
       assert isinstance(axis, int), "not implemented otherwise yet"
@@ -5547,7 +5547,7 @@ def tensor_array_stack(ta, start=0, stop=None, name="TensorArrayStack"):
   :param str name:
   :rtype: tf.Tensor
   """
-  if start is 0 and stop is None:
+  if start == 0 and stop is None:
     return ta.stack(name=name)
   with tf_compat.v1.colocate_with(_tensor_array_ref(ta)):
     with tf.name_scope(name):
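Given the reviewer's concern, one pattern that keeps the original identity semantics without the SyntaxWarning is an explicit type check. This is a sketch of an alternative, not what this PR implements, and `is_constant_zero` is a hypothetical helper name:

```python
def is_constant_zero(value):
    # True only for the Python int 0, mirroring the intent of `value is 0`.
    # Tensors and arrays fail the type check, so no elementwise `==` is
    # built and no ambiguous-truth-value error can occur.
    return type(value) is int and value == 0

print(is_constant_zero(0))    # True for the plain int 0
print(is_constant_zero(0.0))  # False: float, not int
print(is_constant_zero([0]))  # False: not an int at all
```

Deliberately using `type(value) is int` rather than `isinstance` also excludes `bool` (a subclass of `int`), keeping the check as narrow as the original identity test.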