
The 'bucket_boundaries' argument in the data_handler.get_dataset() call is omitted, resulting in an error. #77

kampores opened this issue Dec 23, 2020 · 0 comments
Hello, I tried the "Compute alignment dataset" part, but I cannot run extract_durations.py .
I ran the following command:

python extract_durations.py --config config/wavernn --binary --fix_jumps --fill_mode_next

But I got the following error:

ERROR: model's reduction factor is greater than 1, check config. (r=10)
Traceback (most recent call last):
  File "/home/aiuser8/PycharmProjects/TransformerTTS/extract_durations.py", line 77, in <module>
    dataset = data_handler.get_dataset(bucket_batch_sizes=[64, 42, 32, 25, 21, 18, 16, 14, 12, 11, 1], shuffle=False, drop_remainder=False)
TypeError: get_dataset() missing 1 required positional argument: 'bucket_boundaries'

The 'data_handler.get_dataset()' function is defined at line 80 of preprocessing/datasets.py :

def get_dataset(self, bucket_batch_sizes, bucket_boundaries, shuffle=True, drop_remainder=False):

This function requires both the 'bucket_batch_sizes' and 'bucket_boundaries' arguments. However, the call at line 77 of 'extract_durations.py' omits the 'bucket_boundaries' argument.
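For context, if get_dataset() buckets examples by sequence length the way TensorFlow's bucket_by_sequence_length does (an assumption about the repository, not something confirmed from its code), then 'bucket_boundaries' must contain exactly one fewer entry than 'bucket_batch_sizes'. A minimal sketch of that constraint, with made-up boundary values for illustration:

```python
# Hypothetical illustration of the length constraint imposed by
# tf.data's bucket_by_sequence_length:
#     len(bucket_batch_sizes) == len(bucket_boundaries) + 1
# The batch sizes match the call in extract_durations.py; the
# boundary values below are invented for demonstration only.
bucket_batch_sizes = [64, 42, 32, 25, 21, 18, 16, 14, 12, 11, 1]

def evenly_spaced_boundaries(min_len, max_len, n_boundaries):
    """Return n_boundaries integer cut points between min_len and max_len."""
    step = (max_len - min_len) / (n_boundaries + 1)
    return [round(min_len + step * i) for i in range(1, n_boundaries + 1)]

# One boundary fewer than there are batch sizes.
bucket_boundaries = evenly_spaced_boundaries(50, 600, len(bucket_batch_sizes) - 1)

assert len(bucket_batch_sizes) == len(bucket_boundaries) + 1
print(bucket_boundaries)  # 10 increasing cut points for 11 batch sizes
```

In the real project these boundaries would presumably come from the config (e.g. a 'bucket_boundaries' entry), not from this helper.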

I tried inserting 'bucket_boundaries=config['bucket_boundaries'],' into the call, like this:

dataset = data_handler.get_dataset(bucket_batch_sizes=[64, 42, 32, 25, 21, 18, 16, 14, 12, 11, 1], bucket_boundaries=config['bucket_boundaries'], shuffle=False, drop_remainder=False)

But then I faced the following error:

Traceback (most recent call last):
  File "/home/aiuser8/PycharmProjects/TransformerTTS/extract_durations.py", line 114, in <module>
    pred_mel = tf.expand_dims(1 - tf.squeeze(create_mel_padding_mask(mel_batch[:, 1:, :])), -1) * pred_mel
  File "/home/aiuser8/PycharmProjects/TransformerTTS/venv/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 1164, in binary_op_wrapper
    return func(x, y, name=name)
  File "/home/aiuser8/PycharmProjects/TransformerTTS/venv/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 1496, in _mul_dispatch
    return multiply(x, y, name=name)
  File "/home/aiuser8/PycharmProjects/TransformerTTS/venv/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
    return target(*args, **kwargs)
  File "/home/aiuser8/PycharmProjects/TransformerTTS/venv/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 518, in multiply
    return gen_math_ops.mul(x, y, name)
  File "/home/aiuser8/PycharmProjects/TransformerTTS/venv/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 6068, in mul
    _ops.raise_from_not_ok_status(e, name)
  File "/home/aiuser8/PycharmProjects/TransformerTTS/venv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 6862, in raise_from_not_ok_status
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [21,593,1] vs. [21,600,80] [Op:Mul]
2020-12-23 13:44:32.902541: W tensorflow/core/kernels/data/generator_dataset_op.cc:107] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated.
  [[{{node PyFunc}}]]

Judging from the 'Incompatible shapes: [21,593,1] vs. [21,600,80]' message, it seems that an array with the wrong shape was used.
What is the correct way to declare the 'bucket_boundaries' values?
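For what it's worth, the multiplication fails because element-wise ops require each dimension to match or be 1, and here the time axes differ (593 vs. 600). A minimal NumPy reproduction of the same mismatch (the shapes are copied from the error message, not taken from the project's code):

```python
import numpy as np

# Stand-ins for the two operands in the failing line: a padding mask
# built from a mel batch with 593 time steps, and predicted mels with
# 600 time steps. These arrays are illustrative, not the real tensors.
mask = np.ones((21, 593, 1))
pred_mel = np.ones((21, 600, 80))

try:
    _ = mask * pred_mel  # axis 1 is 593 vs. 600: neither matches nor is 1
except ValueError as e:
    print("broadcast error:", e)
```

So the 'bucket_boundaries' values themselves are probably not the cause; the mask and the predicted mels just need to share the same time axis before multiplying.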
