
Commit

Fixing a pipeline bug caused by the slow tokenizer behaving differently from the fast one.
Narsil authored and Rocketknight1 committed Oct 14, 2022
1 parent 357877a commit f5fbfb9
Showing 1 changed file with 1 addition and 1 deletion.

src/transformers/pipelines/fill_mask.py
@@ -138,7 +138,7 @@ def postprocess(self, model_outputs, top_k=5, target_ids=None):
                 # For multi masks though, the other [MASK] would be removed otherwise
                 # making the output look odd, so we add them back
                 sequence = self.tokenizer.decode(tokens, skip_special_tokens=single_mask)
-                proposition = {"score": v, "token": p, "token_str": self.tokenizer.decode(p), "sequence": sequence}
+                proposition = {"score": v, "token": p, "token_str": self.tokenizer.decode([p]), "sequence": sequence}
                 row.append(proposition)
             result.append(row)
         if single_mask:
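The fix is the wrapped list: tokenizer.decode([p]) instead of tokenizer.decode(p). Both tokenizer implementations accept a list of token ids through decode(), whereas a bare int was handled differently by the slow (pure-Python) tokenizer. Below is a minimal sketch of the invariant the fix restores; the bert-base-uncased checkpoint is purely illustrative and not named in the commit.

from transformers import AutoTokenizer

# Load the same checkpoint with both tokenizer implementations.
fast = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
slow = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)

# A single predicted token id (an int), as produced in fill_mask.postprocess.
p = fast.convert_tokens_to_ids("hello")

# Wrapping the id in a list uses the documented decode() interface,
# which both tokenizer classes handle identically.
assert fast.decode([p]) == slow.decode([p])  # e.g. "hello"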
